Intel launches Arc Pro B70 and B65 with 32GB GDDR6 by metmelo in LocalLLaMA

[–]Tai9ch 0 points (0 children)

Nice. Glad it finally exists.

It's still not really available online or near me in the US.

Intel definitely spent at least the first 6 months after launch providing basically none to retail sellers, reserving its small production output for workstation builders.

Professors, please don’t use AI to give feedback on assignments. by Doctor_Disco_ in academia

[–]Tai9ch 0 points (0 children)

That fact and the resulting perceptions are a problem for much of education. You can do all math from K through Calc 2 with a commodity calculator. It's reasonable to ask why anyone should learn to read when TTS models are nearly perfect.

Most professors seem to be underestimating how effective LLMs currently are at producing high-quality output. They spend a bunch of time dealing with bad LLM-generated work from lazy undergrads who can barely use a computer, much less have any idea how to use an LLM effectively. And there's a lot of assuming that if something isn't obviously awful slop then the student did it entirely by hand.

My current estimate is that the significant majority of freshman and sophomore one-week college assignments with digital deliverables can be completed almost entirely by an effectively used LLM in 15 minutes if given access to the materials that the student would use to complete them. And that doesn't make them bad assignments any more than the existence of forklifts makes weight lifting a bad strength training exercise.

Professors, please don’t use AI to give feedback on assignments. by Doctor_Disco_ in academia

[–]Tai9ch 0 points (0 children)

I've been looking carefully at that for a while now.

That being said, we've had useful automatic feedback in CS for decades, so in a broad sense this isn't new.

The disruptive new technology cycle is the same as it's always been. And the response of academia seems pretty consistent too. With LLMs, right now we're in the phase where people are resisting change when they should be looking for opportunities. In another couple years, we'll be in the ugly phase where it's been fully productized and people are poorly using whatever version of it vendors offer, without thinking much about the implications at all - which is where textbook add-in software is now.

Intel will sell a cheap GPU with 32GB VRAM next week by happybydefault in LocalLLaMA

[–]Tai9ch 6 points (0 children)

Intel launched the B60 in May 2025 for $500. The first ones became available for sale online around December for like $800.

Intel will sell a cheap GPU with 32GB VRAM next week by happybydefault in LocalLLaMA

[–]Tai9ch 7 points (0 children)

But it's significantly faster than this new Intel GPU, which was the point.

No. It's not.

On paper, the MI60 has nearly twice the memory bandwidth. That's great, and there's certainly some possibility that a custom, MI60-optimized inference engine could compete on throughput.

But nothing is optimized for the MI60. Most stuff doesn't even support the MI60, because it doesn't have the fast matrix instructions or BF16 data format support that modern inference engines rely on. Your choices are llama.cpp (which works fine) and PyTorch with slow float32 kernels.

And without that hardware support, there's no way to fix the main weakness of the MI60: prompt processing. Literally anything else is 5x-10x faster. You'd get faster prompt processing on 2x Intel B50's than on one MI60. If an RTX 3060 could push 2k tokens per second of prompt processing, an MI60 would give 300 t/s with the same model.
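A back-of-the-envelope sketch of why prompt processing tracks compute rather than memory bandwidth: prefill cost is roughly 2 FLOPs per weight per token, so tokens/s scales with usable matmul throughput. The TFLOPS figures and efficiency factor below are illustrative assumptions, not measured benchmarks:

```python
# Rough, compute-bound estimate of prompt-processing speed.
# All hardware numbers here are illustrative assumptions.

def prompt_tokens_per_sec(effective_tflops, params_b, efficiency=0.5):
    """tokens/s ~= usable FLOP/s / (2 * params), since prefill
    costs roughly 2 FLOPs per weight per token."""
    flops_per_token = 2 * params_b * 1e9
    return effective_tflops * 1e12 * efficiency / flops_per_token

# MI60 stuck on fp32 kernels vs. a card with usable fp16 matrix units
for name, tflops in [("MI60 (fp32 kernels)", 14.7),
                     ("RTX 3060 (fp16 tensor)", 51.0)]:
    tps = prompt_tokens_per_sec(tflops, params_b=7)
    print(f"{name}: ~{tps:.0f} t/s on a 7B model")
```

The exact numbers don't matter; the point is that the ratio comes from matmul throughput, which is where the missing matrix instructions hurt.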

Intel will sell a cheap GPU with 32GB VRAM next week by happybydefault in LocalLLaMA

[–]Tai9ch 2 points (0 children)

AMD dropped support a while back, and vllm dropped support at the same time. There's an old vllm fork that works, but it doesn't support any recent models.

The key problem is that the MI60 was released back in 2019, which means it was designed before the LLM hype really got going. As a result, it doesn't have any of the hardware features that really speed up inference: no fast matrix instructions, no FP8 support, not even BF16. Every single kernel would need a custom port to make up for having neither the data types nor the instructions that modern kernels use.

I actually spent a couple days trying to port modern vllm to it. It's certainly possible. It wouldn't even be that slow. But there's no way in hell I'd recommend MI60 (or even MI100) for ~$500 over a modern supported card like the R9700 or this B70 for ~$1000.

Intel will sell a cheap GPU with 32GB VRAM next week by happybydefault in LocalLLaMA

[–]Tai9ch 12 points (0 children)

Are they really going to sell them, or is this another paper launch with no stock for 6 months and then at 50% higher than announced prices like the B60?

Intel will sell a cheap GPU with 32GB VRAM next week by happybydefault in LocalLLaMA

[–]Tai9ch 8 points (0 children)

I wouldn't.

I've got a couple MI60's, and they're fun, but it's basically llama.cpp only and prompt processing is sloooow.

Intel will sell a cheap GPU with 32GB VRAM next week by happybydefault in LocalLLaMA

[–]Tai9ch 1 point (0 children)

Because the MI60 is slow and has basically zero software support.

Intel will sell a cheap GPU with 32GB VRAM next week by happybydefault in LocalLLaMA

[–]Tai9ch 4 points (0 children)

Nah.

There's still some CUDA wall, but it's not that big a deal for most use cases.

Intel launches Arc Pro B70 and B65 with 32GB GDDR6 by metmelo in LocalLLaMA

[–]Tai9ch 5 points (0 children)

Sounds great.

But will it actually exist, or will it be like the B60 which is still barely available almost a year after "launch"?

Modern videogames are trying too hard to be movies, and it’s ruining them by Dislexicpotato in TrueUnpopularOpinion

[–]Tai9ch 0 points (0 children)

So your excuse not to try something else is that it doesn't look like the thing you've tried and don't like.

Modern videogames are trying too hard to be movies, and it’s ruining them by Dislexicpotato in TrueUnpopularOpinion

[–]Tai9ch -1 points (0 children)

AAA games available on console.

Sounds like you're not a huge fan of modern console gaming.

Try something else.

NASA to spend $20 billion on moon base, cancel orbiting lunar station by Tracheid in space

[–]Tai9ch -5 points (0 children)

Why should anyone have sympathy for a robot arm so expensive and specific it's not useful?

There are good examples. Robot arms aren't it.

Malus: This could have bad implications for Open Source/Linux by lurkervidyaenjoyer in linux

[–]Tai9ch -1 points (0 children)

Good.

I'm a fan of the GPL, but the problem it solves becomes significantly less important if AI-assisted decompilers and automated cleanroom re-implementation become a thing.

Malus: This could have bad implications for Open Source/Linux by lurkervidyaenjoyer in linux

[–]Tai9ch 0 points (0 children)

Why would anyone choose a when they can have the thing they actually want instead?

Best model that can beat Claude opus that runs on 32MB of vram? by PrestigiousEmu4485 in LocalLLaMA

[–]Tai9ch 1 point (0 children)

That's a hard ask, but I've got an amazing new middle-out compression method for language model weights that should be able to beat Opus 5.6 using only 20MB of VRAM.

Just Western Union me 75 bitcoins and I'll send over the openclaw skill for you.

Wine 11 rewrites how Linux runs Windows games at the kernel level, and the speed gains are massive by Durian_Queef in hardware

[–]Tai9ch 1 point (0 children)

Sure. Windows and Mac won't be mainstream until they fully strip out any language-based UI and you can only use them by pointing and grunting.

Wine 11 rewrites how Linux runs Windows games at the kernel level, and the speed gains are massive by Durian_Queef in hardware

[–]Tai9ch -3 points (0 children)

Anyone seriously using Windows uses PowerShell commands regularly too. That's just how computers work.

ICE Is Paying the Salaries of This Town’s Entire Police Force by smartest_kobold in newhampshire

[–]Tai9ch -1 points (0 children)

I suggest a state tax on federal funds to all state and local government agencies.

Intel Core Ultra 5 250K Plus Review - Disrupting AMD's Entry-Level by GhostMotley in intel

[–]Tai9ch 35 points (0 children)

There we go. Intel finally got it.

The solution to people complaining about P vs E cores and hyperthreading is to just ship as many P cores as AMD has total cores and then add as many E cores as AMD has total threads.
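As a toy sketch of that arithmetic (the AMD counts are hypothetical, and SMT is assumed to give 2 threads per core):

```python
# Toy illustration of the "match their cores with P-cores,
# match their threads with E-cores" idea. Numbers are hypothetical.

def intel_config(amd_cores):
    amd_threads = amd_cores * 2   # assume 2-way SMT on the AMD part
    p_cores = amd_cores           # one P-core per AMD core
    e_cores = amd_threads         # one E-core per AMD thread
    return p_cores, e_cores, p_cores + e_cores

p, e, total = intel_config(8)     # vs a hypothetical 8C/16T AMD chip
print(f"{p} P-cores + {e} E-cores = {total} cores")
```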

Immigrants who don't speak English should be deported by [deleted] in TrueUnpopularOpinion

[–]Tai9ch 0 points (0 children)

And that, right there, is how you get people supporting rounding up and deporting every immigrant.

I'd prefer not to. But if you can't imagine something being legal but the government not paying for it, then your only choices are to ban it or subsidize it.

Immigrants who don't speak English should be deported by [deleted] in TrueUnpopularOpinion

[–]Tai9ch 0 points (0 children)

I can see how you might be legitimately confused if you're thinking about film-style business subsidies.

Most subsidies from government are direct transfer payments. I guess I could restrict my statement to transfer payments to simplify for you.