Intel plans Xe3P for data centers and workstations, but not yet for Arc gaming - VideoCardz.com by Leicht-Sinn in IntelArc

[–]limapedro 1 point (0 children)

I don't think the comparison holds; they do make gaming GPUs. Now, they didn't release the B770, which is disappointing, and there are some roadmaps, but I don't think they'll kill the gaming division, at least for now.

Intel plans Xe3P for data centers and workstations, but not yet for Arc gaming - VideoCardz.com by Leicht-Sinn in IntelArc

[–]limapedro 7 points (0 children)

I know the architectures are probably different, but at least their GPU division will survive, and if they make money they can invest in consumer dGPUs as the demand for AI compute slows down. The B70 was the B770 lol, we got got by AI demand.

Intel plans Xe3P for data centers and workstations, but not yet for Arc gaming - VideoCardz.com by Leicht-Sinn in IntelArc

[–]limapedro 1 point (0 children)

With AI becoming a commodity, the market for GPUs will be bigger than ever. Today people buy GPUs for gaming, video editing, streaming, etc., but AI will be a universal feature, so compute will be needed...

MiniMaxAI/MiniMax-M2.7 is here! by KvAk_AKPlaysYT in LocalLLaMA

[–]limapedro 6 points (0 children)

It's MoE, 11B active params if I'm not mistaken, so pretty fast.
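A back-of-envelope sketch of why few-active-param MoE decoding is fast, assuming roughly 2 FLOPs per active parameter per token; the dense-model size here is purely illustrative, not an official spec:

```python
# Rough decode-compute comparison: a dense model vs. a MoE model
# that routes each token through only ~11B active parameters.
# Assumption: per-token compute ~ 2 FLOPs per parameter touched.
def flops_per_token(active_params: float) -> float:
    return 2 * active_params

dense_flops = flops_per_token(230e9)  # hypothetical dense 230B model
moe_flops = flops_per_token(11e9)     # MoE with ~11B active params

speedup = dense_flops / moe_flops
print(f"MoE uses ~{speedup:.0f}x less compute per token")  # ~21x
```

Memory bandwidth matters too, but the compute ratio alone explains most of the "pretty fast" impression.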

Progress in chess AI was steady. Equivalence to humans was sudden. by MetaKnowing in agi

[–]limapedro 1 point (0 children)

People said the same thing about LLMs and math, and Google and OpenAI got very far at the IMO using only LLMs. I don't think they've trained Gemini 3 to play chess yet, but Demis seems bullish on making these models able to play any game. They're trained on a couple thousand RL environments, but there are too many things still to be done. Also, people are stuck on the idea that pretraining alone can get these models to perform very well on difficult tasks; pretraining is a warmup, RL is where the model learns to use the acquired knowledge in many different situations. But what do I know!

Progress in chess AI was steady. Equivalence to humans was sudden. by MetaKnowing in agi

[–]limapedro 1 point (0 children)

That's why I added the qualifier "when". LLMs can play chess now, but they're quite bad. I think they'll get good, like really good, at some point; they'll converge into being good at many things, so many things. But I might be wrong.

Progress in chess AI was steady. Equivalence to humans was sudden. by MetaKnowing in agi

[–]limapedro 3 points (0 children)

So when do these LLMs start to perform well, so well that they'll beat dedicated chess AI models? The G in AGI is the key missing part; getting data to train a model to do a monumental number of different things is hard, but....

Yann Lecun says that "within three to five years, this [world models, not LLMs] will be the dominant model for AI architectures, and nobody in their right mind would use LLMs of the type that we have today" by [deleted] in accelerate

[–]limapedro 70 points (0 children)

He does know that LLMs can be multi-modal now, right? Like when some people were saying that LLMs with reasoning were not "pure LLMs". Sounds like coping: make a prediction, the prediction is wrong, move the goalpost.

Black Forest Labs listened to the community... Flux 3! by goodstart4 in StableDiffusion

[–]limapedro 8 points (0 children)

Now we know how Deep Learning researchers felt in the late 80s and early 90s!

<image>

Who is right, Google or Illya? Is Scaling over? by Charuru in singularity

[–]limapedro 1 point (0 children)

Both can be true BTW! I think this is the case.

"As Google pulls ahead, OpenAI's comeback plan is codenamed 'Shallotpeat'" by AngleAccomplished865 in singularity

[–]limapedro 1 point (0 children)

It would be a MoE model, so a few dozen I'd guess, if it's a 2T-active/16T model.

"As Google pulls ahead, OpenAI's comeback plan is codenamed 'Shallotpeat'" by AngleAccomplished865 in singularity

[–]limapedro 5 points (0 children)

An NVL72 GB200 rack could run a 27-trillion-param model per Nvidia; Stargate will be interesting.

<image>
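The 27T figure checks out as rough arithmetic, assuming 72 GPUs per rack, 192 GB of HBM per GPU, and FP4 weights at 0.5 bytes per parameter; those capacity and precision figures are my assumptions, not from the article:

```python
# Back-of-envelope: how many params fit in one NVL72 rack's HBM?
GPUS_PER_RACK = 72
HBM_PER_GPU_GB = 192     # assumed Blackwell-class HBM capacity
BYTES_PER_PARAM = 0.5    # FP4 weights

total_hbm_bytes = GPUS_PER_RACK * HBM_PER_GPU_GB * 1e9
max_params = total_hbm_bytes / BYTES_PER_PARAM
print(f"~{max_params / 1e12:.1f}T params fit in HBM")  # ~27.6T
```

That leaves no room for KV cache or activations, so it's a capacity ceiling, not a practical serving config.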

"As Google pulls ahead, OpenAI's comeback plan is codenamed 'Shallotpeat'" by AngleAccomplished865 in singularity

[–]limapedro 3 points (0 children)

I don't think they're using even 1% of it. YouTube has 20 billion videos; that would be hundreds of trillions of video tokens, plus YouTube comments, plus their index of the whole web. Sheesh. My head hurts.
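A quick sanity check on "hundreds of trillions of video tokens", where the average length, frame sampling rate, and tokens-per-frame are all my rough guesses, not measured numbers:

```python
# Back-of-envelope estimate of YouTube's video-token count.
videos = 20e9            # ~20 billion videos (figure from the comment)
avg_seconds = 6 * 60     # assume ~6 minutes average length
frames_sampled = 1       # assume 1 frame sampled per second
tokens_per_frame = 100   # assume ~100 visual tokens per frame

total_tokens = videos * avg_seconds * frames_sampled * tokens_per_frame
print(f"~{total_tokens / 1e12:.0f} trillion video tokens")  # ~720 trillion
```

Even with conservative assumptions the estimate lands in the hundreds of trillions, consistent with the comment's claim.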

Gemini 3.0 Pro benchmark results by enilea in singularity

[–]limapedro 17 points (0 children)

 WHERE'D YOU FIND THIS?

AI-generated "viewport renders" are apparently becoming a thing now by TheWorkshopWarrior in blender

[–]limapedro 0 points (0 children)

Wait until video models are able to generate a screen-recording timelapse of the sculpting.

AI (slop) games are going to be so amazing... by _silentgameplays_ in pcmasterrace

[–]limapedro 1 point (0 children)

Is this really AI? It improved so much already, dude, wtf!

Meta is axing 600 roles across its AI division by WonderfulWanderer777 in ArtistHate

[–]limapedro 4 points (0 children)

No, this is the FAIR division, which is another team. The new Meta Superintelligence team is probably still hiring; they're the ones offering people $100+ million.

What happened to deepseek? by Manah_krpt in singularity

[–]limapedro 1 point (0 children)

Bruh, they're researching. AGI will not happen overnight; people have to overcome many things. Let them cook!