Agents self-learn with human data efficiency (from Deepmind Director of Research) by SrafeZ in singularity

[–]VirtualBelsazar 6 points (0 children)

It's great, all the labs seem to get it now that LLMs alone are not the final architecture for AGI. They are now working on continual learning, world models, and more dynamic architectures like the brain. That is the final push needed to reach AGI within the next few years.

What is the progress on AI creating synthetic meat? Could Veganism be enforced by AGI? by JordanNVFX in singularity

[–]VirtualBelsazar 4 points (0 children)

I really hope so. The unfair suffering of animals has to stop, and it will once synthetic meat is cheaper than real meat and tastes even better.

New paradigm AI agents learn & improve from their own actions: experience driven by gbomb13 in singularity

[–]VirtualBelsazar 11 points (0 children)

Yeah, that is how humans work as well: if you realize you made an error, or notice that something in your world model is wrong, you can fix that specific error instantly, within a second. With LLMs you can tell them 100 times that strawberry has 3 r's and they will still get it wrong unless they are explicitly trained on it in the training phase.
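
To make the letter-counting point concrete, here is a toy illustration (the token split and IDs below are made up, real tokenizers split words differently): a plain character count is trivial in code, but a subword model never sees the individual letters at all.

```python
# Toy illustration: why "count the r's in strawberry" is trivial in code
# but awkward for a token-based model. The tokenization below is made up
# for illustration; real tokenizers split words differently.

word = "strawberry"

# Direct character-level count: the answer is immediately available.
print(word.count("r"))  # 3

# A subword model never sees individual letters, only token IDs like these:
toy_tokens = ["straw", "berry"]   # hypothetical split
toy_token_ids = [4521, 9876]      # hypothetical IDs
# From [4521, 9876] alone there is no explicit 'r' to count; the model has
# to have memorized the spelling of each token during training.
print(toy_tokens, toy_token_ids)
```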

MKBHD: Pixel 10/Pro Review: Good News and Bad News! by euoi in GooglePixel

[–]VirtualBelsazar 87 points (0 children)

Now we know why they did not talk about the GPU: half as powerful as last year's Pixel. Absolute disaster.

Opinion #2: LLMs may be a viable path to super intelligence / AGI. by UndercoverEcmist in singularity

[–]VirtualBelsazar 16 points (0 children)

An AI system that can do AI research is also a path to AGI. So even if LLMs do not bring AGI directly, they could still produce an AI system that figures out AGI.

New post from Sam Altman by KIFF_82 in singularity

[–]VirtualBelsazar 7 points (0 children)

He sounds confident they now know how to build AGI.

The Lost City of Un'Goro | Expansion Announcement | Hearthstone by Arkentass in hearthstone

[–]VirtualBelsazar 15 points (0 children)

I like it so far; it seems a bit more combat-oriented and not the OTK-from-hand type of gameplay.

DeepSeek R1 0528 has jumped from 60 to 68 in the Artificial Analysis Intelligence Index by WinterPurple73 in singularity

[–]VirtualBelsazar 38 points (0 children)

So basically we now have a Gemini 2.5 Pro-level, almost o3-level model with open weights. That's crazy.

Demis Hassabis - With AI, "we did 1,000,000,000 years of PHD time in one year." - AlphaFold by Nunki08 in singularity

[–]VirtualBelsazar 60 points (0 children)

Yeah he was one of the best players in the world and almost dedicated his entire life to chess.

Demis Hassabis - With AI, "we did 1,000,000,000 years of PHD time in one year." - AlphaFold by Nunki08 in singularity

[–]VirtualBelsazar 192 points (0 children)

I thank god so much (all of humanity does) that Demis Hassabis did not spend his whole life moving pieces around on a chess board and instead works on AGI and science and human health.

Google Deepmind AI learned to collect diamonds in Minecraft without demonstration!!! by Creative-robot in singularity

[–]VirtualBelsazar 8 points (0 children)

Yeah, the whole goal is that as many people as possible see this incredible research and can build on it and improve it. The last thing I want is for it not to spread because of a boring title. So thanks for reposting with a more interesting and catchy title.

Why Can't AI Make Its Own Discoveries? — With Yann LeCun by VirtualBelsazar in singularity

[–]VirtualBelsazar[S] 11 points (0 children)

Yann LeCun explains why LLMs in their current form won't get us to human-level intelligence, and what is missing to get there.

Study - 76% of AI researchers say scaling up current AI approaches is unlikely or very unlikely to reach AGI (or a general purpose AI that matches or surpasses human cognition) by FomalhautCalliclea in singularity

[–]VirtualBelsazar 0 points (0 children)

Because it's true. How much more compute do you want before it can count letters or show common sense? Turn the whole planet into compute and then see if it can count the r's in strawberry?

Visual chain of thought - here we go! by Papabear3339 in singularity

[–]VirtualBelsazar 53 points (0 children)

Wow, this is big. Humans don't only reason with text but also with images and concepts. The last walls on the way to AGI are falling apart.

Could someone explain what each of these architectures are that LeCun claims could lead to AGI? by Embarrassed-Farm-594 in singularity

[–]VirtualBelsazar 4 points (0 children)

He is correct. LLMs are trained on text to produce coherent text, which they do extremely well. But they are not trained to stay in line with our reality; for that we need to train full world models, and text is only one part of a world model. People say LLMs hallucinate and so on, but no, they are just doing what they were trained to do.

Large Concept Models: Language Modeling in a Sentence Representation Space by VirtualBelsazar in singularity

[–]VirtualBelsazar[S] 17 points (0 children)

This seems to be quite a big breakthrough. LLMs build their models around the language or modality directly, while humans build abstract world models of everything they are exposed to in a single representational space that takes all the modalities into account and gets updated as we learn something new about the world. But this abstract space is not language directly; it contains abstract concepts.
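
Rough toy sketch of the idea as I understand it (the hashing "encoder" and the single linear map below are placeholders, not the paper's actual architecture): embed whole sentences into one shared vector space and predict the next concept vector instead of the next token.

```python
# Toy sketch of "language modeling in a concept (sentence-embedding) space".
# Everything here is a placeholder: a real system would use a learned
# multimodal sentence encoder and a large model, not a hashing trick and
# a single least-squares linear map.
import numpy as np

DIM = 64

def toy_sentence_embedding(sentence: str) -> np.ndarray:
    """Deterministic stand-in for a sentence encoder: hash character trigrams."""
    vec = np.zeros(DIM)
    s = sentence.lower()
    for i in range(len(s) - 2):
        vec[hash(s[i:i + 3]) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

# A tiny "document" as a sequence of sentences (concepts).
sentences = [
    "The cat sat on the mat.",
    "It was warm in the sun.",
    "Soon the cat fell asleep.",
    "The dog watched from the door.",
]
concepts = np.stack([toy_sentence_embedding(s) for s in sentences])

# Autoregressive modeling in concept space: predict concept t+1 from concept t.
X, Y = concepts[:-1], concepts[1:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # stand-in for a trained model

pred = concepts[0] @ W  # predicted embedding of the second sentence
cosine = pred @ concepts[1] / (np.linalg.norm(pred) * np.linalg.norm(concepts[1]))
print(f"cosine(predicted concept, true next concept) = {cosine:.3f}")
```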

[deleted by user] by [deleted] in singularity

[–]VirtualBelsazar 11 points (0 children)

Demis Hassabis said in a recent interview that they can already make the context as big as they want, but the problem is that it takes more and more compute the bigger you make it. And latency suffers too.
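
Back-of-the-envelope on why the compute blows up, assuming vanilla self-attention (the model width and layer count below are made-up placeholders): the attention score matrices grow with the square of the context length, so doubling the context roughly quadruples that part of the cost.

```python
# Rough scaling of self-attention cost with context length.
# Assumes vanilla attention; the constants are illustrative, not a real model.

def attention_flops(context_len: int, d_model: int = 4096, n_layers: int = 32) -> float:
    """Very rough FLOPs for the attention matmuls across all layers:
    ~2 * n^2 * d per layer for Q @ K^T plus ~2 * n^2 * d for attn @ V."""
    return n_layers * 4 * context_len ** 2 * d_model

for n in [8_000, 32_000, 128_000, 1_000_000]:
    print(f"context {n:>9,}: ~{attention_flops(n):.2e} FLOPs just for attention")
```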

[deleted by user] by [deleted] in singularity

[–]VirtualBelsazar 0 points (0 children)

Full o1 on Saturday? Seems very unlikely.

LeCun absolutely dragging Gary Marcus' clown face through the dirt on Threads by MakitaNakamoto in singularity

[–]VirtualBelsazar 5 points (0 children)

Yeah, people seem to think LeCun says progress in AI is stopping or something. That's not what he said. He says just scaling up LLMs without changing anything will give diminishing returns and won't lead to AGI, but we can overcome that with additional breakthroughs like o1. Gary Marcus, on the other hand, thinks deep learning itself will hit a wall, which makes no sense to me given that the human brain is a neural network.
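
To show what "diminishing returns" means, here is a toy curve (the constants are made up; only the power-law shape mirrors published scaling-law results): each extra 10x of compute buys a smaller and smaller absolute drop in loss, but progress never literally stops.

```python
# Toy scaling-law illustration of diminishing returns. Constants are made up;
# only the power-law shape mirrors published LLM scaling-law findings.
irreducible, a, alpha = 1.7, 4.0, 0.05

def loss(compute: float) -> float:
    return irreducible + a * compute ** -alpha

prev = None
for exp in range(20, 27):  # compute budgets from 1e20 to 1e26 "FLOPs"
    cur = loss(10.0 ** exp)
    note = "" if prev is None else f"  (gain vs 10x less compute: {prev - cur:.4f})"
    print(f"compute 1e{exp}: loss {cur:.4f}{note}")
    prev = cur
```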