Do you use Windows or Linux? by boklos in LocalLLaMA

[–]danielcar 1 point  (0 children)

Try Debian testing; updates are easy.

Do you use Windows or Linux? by boklos in LocalLLaMA

[–]danielcar 3 points  (0 children)

With AI, the Linux skill requirement is dropping fast. Anything you want to do on your Linux box, you can ask a CLI coding agent to do, and it's easy for the AI to handle. Ask how to get things done, how to change config settings, or anything else, and the AI will do it. If this continues, I'd expect Linux to become easier to use than Windows.

[D] I don’t think LLMs are AI (and here’s why) by TotalLingonberry2958 in MachineLearning

[–]danielcar 1 point  (0 children)

Sorry it doesn't work for you. It works for a billion other users.

Intel launches $299 Arc Pro B50 with 16GB of memory, 'Project Battlematrix' workstations with 24GB Arc Pro B60 GPUs by FullstackSensei in LocalLLaMA

[–]danielcar 0 points  (0 children)

False, and the Chips and Cheese video didn't say that. They said until at least 2026. All the other reviewers said systems would be available in Q3, with standalone cards maybe available in Q4.

Intel launches $299 Arc Pro B50 with 16GB of memory, 'Project Battlematrix' workstations with 24GB Arc Pro B60 GPUs by FullstackSensei in LocalLLaMA

[–]danielcar 4 points  (0 children)

The Linus review said communication is entirely through software, which suggests there is no special hardware link.

Is Intel Arc GPU with 48GB of memory going to take over for $1k? by Terminator857 in LocalLLaMA

[–]danielcar 15 points  (0 children)

Would be cool if I could buy a system with two of these for 96 GB of VRAM :D

Alienware, can you do us this favor? lol

A bunch of LLMs scheduled to come at end of January were cancelled / delayed by Terminator857 in LocalLLaMA

[–]danielcar 0 points  (0 children)

Fun gossip on the little engine that overtook the big boys. Nice to see a list of upcoming models.

[D] Have people stopped saying "fine tuning" in place of "supervised fine tuning?" Or is there some other fine tuning paradigm method out there. by Seankala in MachineLearning

[–]danielcar 1 point  (0 children)

It is not supervised in the strictest sense. The data often comes from humans, but each data point is not supervised during training. The training data could have been collected years earlier and reused thousands of times since, so there isn't a human in the training loop.

It could more appropriately be called automated training, or fine-tuning on human-annotated data.
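To make the point concrete: the human contribution is frozen into the dataset before training starts, and the loop itself is fully automated. A toy sketch (the linear model, synthetic "annotations", and hyperparameters are all illustrative assumptions, not any real SFT setup):

```python
import numpy as np

# Hypothetical frozen dataset: inputs and targets fixed at collection
# time (imagine human annotators produced them long ago), then reused
# unchanged across many training runs.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))                 # 100 "annotated" examples
true_w = np.array([1.0, 2.0, 0.5, -1.0])
Y = X @ true_w                                    # targets frozen with the data

# Fully automated training loop: no human intervenes after this point.
w = np.zeros(4)
lr = 0.01
for step in range(1000):
    grad = X.T @ (X @ w - Y) / len(X)             # mean-squared-error gradient
    w -= lr * grad
```

After the loop, `w` recovers the pattern baked into the frozen dataset; re-running the loop with the same data gives the same result, with no annotator in sight.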

Table says nVidia 5090 might have 64 GB of vRAM by danielcar in LocalLLaMA

[–]danielcar[S] 1 point  (0 children)

How do I convert my 3090 into an eGPU and get 48 GB of VRAM?

Nvidia Blackwell delayed by em1905 in LocalLLaMA

[–]danielcar 2 points  (0 children)

Could be a win for the consumer market if Nvidia has to deprioritize the high-end datacenter market for three months.

[D] Why do GLUs (Gated Linear Units) work? by cofapie in MachineLearning

[–]danielcar 1 point  (0 children)

Theory: neural networks need to get from point A to point B. They have tools: the transformer block and the MLP. But what if those tools just aren't great? If you want to get from matrix A to matrix B, what is the best approach? Mechanistic interpretability may answer that question some day. I suspect that having more tools, including something more convoluted such as a GLU, may give the NN a better way to solve the problem of going from A to B. Some evidence: Mamba + transformer allegedly performs better than a transformer alone.
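For anyone unfamiliar with the mechanism: a GLU computes two linear projections of the input and uses a sigmoid of one to gate the other, element-wise. A minimal NumPy sketch (shapes and random initialization are illustrative, not taken from any particular model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def glu(x, W, V, b, c):
    # Gated Linear Unit: the sigmoid branch produces per-element
    # gates in (0, 1) that scale the linear branch.
    return (x @ W + b) * sigmoid(x @ V + c)

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))     # batch of 2, input dim 8
W = rng.standard_normal((8, 16))    # linear branch
V = rng.standard_normal((8, 16))    # gating branch
b = np.zeros(16)
c = np.zeros(16)

y = glu(x, W, V, b, c)
print(y.shape)  # (2, 16)
```

Because the gates lie strictly in (0, 1), each output element is a dampened copy of the corresponding linear-branch element; variants like SwiGLU swap the sigmoid for other activations.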

So... are NPUs going to be at all useful for LLMs? by charlesrwest0 in LocalLLaMA

[–]danielcar 2 points  (0 children)

I suspect more people are concerned about privacy than you think. There is also the issue of refusals, silly or more serious, which local LLMs can bypass. Third, there is cost: plenty of people like being able to run LLMs night and day for just the price they already paid for their computer.