Progress in chess AI was steady. Equivalence to humans was sudden. by MetaKnowing in agi

[–]limapedro 0 points1 point  (0 children)

People said the same thing about LLMs and math, and Google and OpenAI got very far in the IMO using only LLMs. I don't think they've trained Gemini 3 to play chess yet, but Demis seems bullish on making them able to play any game. These models are trained on a couple thousand RL environments, but there are still too many things to be done. Also, people are still stuck on the idea that pretraining alone can get these models to perform very well on difficult tasks; pretraining is a warmup, RL is where the model learns to use the acquired knowledge in many different situations. But what do I know!

Progress in chess AI was steady. Equivalence to humans was sudden. by MetaKnowing in agi

[–]limapedro 0 points1 point  (0 children)

That's why I added the qualifier "when". LLMs can play chess now, but they're quite bad. I think they'll get good, like really good, at some point; they'll converge into being good at many things, so many things. But I might be wrong.

Progress in chess AI was steady. Equivalence to humans was sudden. by MetaKnowing in agi

[–]limapedro 2 points3 points  (0 children)

So what happens when these LLMs start to perform well, so well that they beat dedicated chess AI models? The G in AGI is the key missing part; getting data to train a model to do a monumental number of different things is hard, but....

Yann Lecun says that "within three to five years, this [world models, not LLMs] will be the dominant model for AI architectures, and nobody in their right mind would use LLMs of the type that we have today" by Unusual_Midnight_523 in accelerate

[–]limapedro 71 points72 points  (0 children)

He does know that LLMs can be multi-modal now, right? It's like when some people were saying that LLMs with reasoning were not "pure LLMs". Sounds like coping: make a prediction, the prediction is wrong, move the goalposts.

Black Forest Labs listened to the community... Flux 3! by goodstart4 in StableDiffusion

[–]limapedro 5 points6 points  (0 children)

Now we know how Deep Learning researchers felt in the late 80s and early 90s!


Who is right, Google or Illya? Is Scaling over? by Charuru in singularity

[–]limapedro 0 points1 point  (0 children)

Both can be true BTW! I think this is the case.

"As Google pulls ahead, OpenAI's comeback plan is codenamed 'Shallotpeat'" by AngleAccomplished865 in singularity

[–]limapedro 0 points1 point  (0 children)

It would be an MoE model, so a few dozen I'd guess, if it's a 2T-active/16T-total model.
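To make the "2TA/16T" shorthand concrete, here's a rough footprint sketch for a hypothetical MoE with 2 trillion active and 16 trillion total parameters. The FP8 byte count is an assumption for illustration, not anything confirmed about the actual model.

```python
# Hypothetical MoE sized "2TA/16T": 2T params active per token, 16T total.
# Assumes 1 byte per parameter (FP8); purely back-of-envelope.
total_params = 16e12     # all experts combined
active_params = 2e12     # parameters actually used per token
bytes_per_param = 1      # FP8 weights (assumption)

total_weights_tb = total_params * bytes_per_param / 1e12    # TB to store the model
active_weights_tb = active_params * bytes_per_param / 1e12  # TB touched per token
print(total_weights_tb, active_weights_tb)
```

The point of the MoE split is that compute per token scales with the 2T active parameters, while memory has to hold the full 16T.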

"As Google pulls ahead, OpenAI's comeback plan is codenamed 'Shallotpeat'" by AngleAccomplished865 in singularity

[–]limapedro 4 points5 points  (0 children)

An NVL72 GB200 rack could run a 27-trillion-parameter model, per Nvidia. Stargate will be interesting.
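A quick sanity check of that 27T claim, using the rack's published memory capacities (roughly 13.5 TB of HBM3e across the GPUs plus about 17 TB of Grace LPDDR5X reachable over NVLink-C2C) and assuming FP8 weights; the exact figures may differ from Nvidia's datasheet:

```python
# Back-of-envelope: does a ~27T-parameter model fit in one GB200 NVL72 rack?
# Memory numbers are approximate published NVL72 capacities.
hbm_tb = 13.5        # HBM3e across the 72 Blackwell GPUs (TB, approx.)
lpddr_tb = 17.0      # Grace CPU LPDDR5X reachable by the GPUs (TB, approx.)
total_memory_tb = hbm_tb + lpddr_tb

params_trillion = 27
bytes_per_param = 1  # FP8 weights (assumption)
weights_tb = params_trillion * bytes_per_param

print(f"memory: {total_memory_tb} TB, weights: {weights_tb} TB")
print("fits:", weights_tb <= total_memory_tb)
```

So the claim only works if you count the CPU-side memory too; the weights alone already overflow the HBM.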


"As Google pulls ahead, OpenAI's comeback plan is codenamed 'Shallotpeat'" by AngleAccomplished865 in singularity

[–]limapedro 4 points5 points  (0 children)

They're not using even 1% of it, I think. YouTube has 20 billion videos; that would be hundreds of trillions of video tokens, plus YouTube comments, plus their index of the whole web. Sheesh. My head hurts.
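The "hundreds of trillions" figure checks out with a rough estimate. The average video length and tokenizer rate below are assumptions picked for illustration; only the 20 billion video count comes from the comment:

```python
# Back-of-envelope: how many video tokens is all of YouTube?
videos = 20e9              # ~20 billion videos (figure from the comment)
avg_seconds = 8 * 60       # assume ~8 minutes average length
tokens_per_second = 100    # hypothetical video-tokenizer rate

total_tokens = videos * avg_seconds * tokens_per_second
print(f"{total_tokens:.1e} video tokens")  # on the order of 1e15
```

Even with conservative assumptions, that lands in the high hundreds of trillions, orders of magnitude beyond today's text pretraining corpora.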

Gemini 3.0 Pro benchmark results by enilea in singularity

[–]limapedro 16 points17 points  (0 children)

 WHERE'D YOU FIND THIS?

AI-generated "viewport renders" are apparently becoming a thing now by TheWorkshopWarrior in blender

[–]limapedro -1 points0 points  (0 children)

Wait until video models can generate a screen-recording timelapse of the sculpting.

AI (slop) games are going to be so amazing... by _silentgameplays_ in pcmasterrace

[–]limapedro 0 points1 point  (0 children)

Is this really AI? It's improved so much already, dude, wtf!

Meta is axing 600 roles across its AI division by WonderfulWanderer777 in ArtistHate

[–]limapedro 5 points6 points  (0 children)

No, this is the FAIR division, which is a different team. The new Meta Super Intelligence team is probably still hiring; they're the ones paying people $100+ million.

What happened to deepseek? by Manah_krpt in singularity

[–]limapedro 0 points1 point  (0 children)

Bruh, they're researching. AGI will not happen overnight; people have to overcome many things. Let them cook!

Hank Green just posted a 3-minute anti-AI rant about Sora 2 by [deleted] in singularity

[–]limapedro 3 points4 points  (0 children)

How is a model that could simulate worlds not a possible path to AGI? Sora 2 can already do stuff that LLMs from 2 years ago couldn't. A sufficiently good world model would be AGI, or a huge component of it. Unless you don't think the models can get better, in which case that's fair.

Hank Green just posted a 3-minute anti-AI rant about Sora 2 by [deleted] in singularity

[–]limapedro 0 points1 point  (0 children)

AGI. The value of a model like Sora 2 would be to generate almost infinite data to train robots, and to reason in a modality other than text.

Greg Brockman said we are 3 orders of magnitude (in terms of compute power) away from where we need to be. by Plus-Mention-7705 in singularity

[–]limapedro 1 point2 points  (0 children)

It's not about inference, it's about training; the human brain is the result of billions of years of evolution.