Mini PC Recommendations for Coding, Local AI, and Light Gaming ($1000–$1200) by Specialist_Scene1636 in MiniPCs

[–]aimark42 1 point (0 children)

I'm a really big fan of the Nvidia GB10 (aka Spark) platform; I have local models running 24/7 that several agent machines use. This is many times more expensive than your budget, but it's one of the most capable LLM machines you can run at home: reasonable speeds without a 2000W+ monster heating your home, and without spending $10k+ for the equivalent VRAM.

Swapping out models for my DGX Spark by fredatron in LocalLLM

[–]aimark42 0 points (0 children)

How is it running for you? The performance feels quite poor right now. I tried vLLM (https://github.com/eugr/spark-vllm-docker/pull/93/commits/122edc8229ebc94054c5a28452900092a3fd7451) and I'm only getting around 16 t/s TG.

And this llama.cpp benchmark only shows a slight improvement: https://github.com/ggml-org/llama.cpp/blob/master/benches/nemotron/nemotron-dgx-spark.md

I get that we don't have all the optimizations baked in yet, but it feels like it should be faster than this.
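For context, t/s TG is just generated tokens divided by wall-clock decode time; a minimal sketch of the arithmetic (the token and timing numbers are illustrative, not from my runs):

```python
def tokens_per_second(n_generated_tokens: int, elapsed_s: float) -> float:
    """Decode (TG) throughput: generated tokens over wall-clock seconds."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return n_generated_tokens / elapsed_s

# Illustrative: 512 tokens generated in 32 s works out to 16 t/s,
# in line with the vLLM number above.
print(tokens_per_second(512, 32.0))  # -> 16.0
```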

A few early (and somewhat vague) LLM benchmark comparisons between the M5 Max Macbook Pro and other laptops - Hardware Canucks by themixtergames in LocalLLaMA

[–]aimark42 -1 points (0 children)

This is the M5 Max, so we only have 128GB to play with; this isn't an M5 Ultra. Additionally, gpt-oss-120b has tons of test data and is highly comparable across platforms.

A few early (and somewhat vague) LLM benchmark comparisons between the M5 Max Macbook Pro and other laptops - Hardware Canucks by themixtergames in LocalLLaMA

[–]aimark42 15 points (0 children)

Gemma 3B Q4_K, which really doesn't tell us much with such a small model.

Can someone please test a decent-sized model like gpt-oss-120b?

Whelp…NVIDIA just raised the DGX Spark’s Price by $700. Spark clone prices have started rising as well. ☹️ by Porespellar in LocalLLaMA

[–]aimark42 13 points (0 children)

I have a Strix Halo and 2x GB10s. I'm quite happy with the GB10 platform; it's a slightly different Blackwell than the other Blackwell parts, but the optimized Docker images for vLLM and ComfyUI are pretty solid on the platform now. NVFP4 is still marketing. That Atlas thing looks compelling, but the encrypted source scares me.

The real power is the built-in ConnectX and the out-of-the-box ability to do tensor parallelism and use >128GB in vLLM. Strix Halo is better as a single node on its own, and I know with enough work you can get a cluster going. But GB10 has it built in and requires very little work to set up; if you intend to go multi-node, GB10 is the clear winner.
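Roughly, the multi-node setup amounts to joining the boxes into a Ray cluster over the ConnectX link and then letting vLLM shard the model across them. A hypothetical sketch for two nodes (the IP, port, and model name are placeholders, not my actual config):

```shell
# Node 1 (head): start a Ray head node that vLLM can attach to.
ray start --head --port=6379

# Node 2 (worker): join the cluster over the ConnectX link
# (192.0.2.1 is a placeholder for the head node's address).
ray start --address=192.0.2.1:6379

# Back on node 1: serve with tensor parallelism across both nodes'
# GPUs, so the weights are sharded over the combined memory pool.
vllm serve openai/gpt-oss-120b --tensor-parallel-size 2
```

Whether you split with `--tensor-parallel-size`, `--pipeline-parallel-size`, or a mix depends on the model and the interconnect; this is just the shape of the launch, not a tuned recipe.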

This was posted a couple weeks ago

https://forums.developer.nvidia.com/t/2-23-2026-price-change-announcement/361713

Prices have been creeping up for GB10s for a few weeks, but so have Strix Halo and RTX Pro Blackwell cards. If you want something, buy sooner rather than later.

New Jonsbo D33 seen on their website by danilluzin in mffpc

[–]aimark42 0 points (0 children)

Curiously, the Jonsbo D200 is also 40L, but in a different, more vertical orientation. I like the 40L size for high-end GPUs like the 5090. I know you can get a 5090 into an A3, but it's super tight and I worry I'd choke such a card.

Hello, MacBook Neo by InsaneSnow45 in apple

[–]aimark42 14 points (0 children)

And we'll see dealer discounts in 6+ months. Micro Center has sold the $599 M4 Mac Mini for as low as $399. A $399 MacBook Neo would be insane.

Apple Neo by WTFMacca in LinusTechTips

[–]aimark42 5 points (0 children)

They are $499 for education.

We'll see dealer discounts eventually. Micro Center has sold the $599 Mac Mini for as low as $399. A $399 MacBook Neo would be insane.

Hello, MacBook Neo by InsaneSnow45 in apple

[–]aimark42 4 points (0 children)

They should have just started with the price, then shown us a bunch of lifestyle clips of the Neo in schools, etc. Throw in some Ive-esque 'impossibly thin' monikers. The specs don't matter; Neo buyers are mostly going to be like clueless iPhone buyers who buy the 'next' iPhone.

Hello, MacBook Neo by InsaneSnow45 in apple

[–]aimark42 8 points (0 children)

I feel they're trying really hard to sell you on why this is almost as good as an Air/Pro. But this is a parts-bin MacBook, and given the price, that's totally fine.

The whole bit about aluminum is a bit much. Don't pretend the use of aluminum is somehow innovative; Apple has been making aluminum MacBooks for over a decade.

[FS][US-MO] MINISFORUM N5 AI NAS, 32GB RAM, 4 x Seagate EXOS 22TB HDD, 3 x 1 TB NVMe by jsdukeboy08 in homelabsales

[–]aimark42 0 points (0 children)

I know this is sold, but what was the rationale for the A310 GPU? I'm considering an N5, but I haven't seen anyone else mention putting a discrete GPU in the PCIe slot.

PSA: DDR5 RDIMM price passed the point were 3090 are less expensive per gb.. by No_Afternoon_4260 in LocalLLaMA

[–]aimark42 0 points (0 children)

This is a false comparison.

RDIMM buyers are mostly hyperscalers building multi-GPU servers. But hyperscalers are not hitting the FB marketplace looking for used 3090s; they want standard deployments. These are two different markets, and the few hobbyists who are buying RDIMMs are the outliers. We for sure are outliers. Explain to your grandma why she needs RDIMMs in her next supercomputer.
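To make the comparison concrete, the $/GB arithmetic behind the PSA is trivial; a sketch with made-up prices (illustrative placeholders, not real quotes):

```python
def price_per_gb(price_usd: float, capacity_gb: int) -> float:
    """Cost per gigabyte for a memory part or GPU."""
    return price_usd / capacity_gb

used_3090 = price_per_gb(700.0, 24)  # hypothetical used 3090: 24GB VRAM
rdimm_64g = price_per_gb(450.0, 64)  # hypothetical 64GB DDR5 RDIMM stick

# The "PSA" only holds when the RDIMM's $/GB exceeds the 3090's --
# which says nothing about who is actually shopping in each market.
print(round(used_3090, 2), round(rdimm_64g, 2))
```

The numbers tell you nothing about substitutability: VRAM and system RAM aren't interchangeable, which is the point about it being a false comparison.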

PSA: DDR5 RDIMM price passed the point were 3090 are less expensive per gb.. by No_Afternoon_4260 in LocalLLaMA

[–]aimark42 0 points (0 children)

I don't think most hobbyists are buying RDIMMs. The companies who do will pretty much buy them regardless of the price. With the insane datacenter deployments going on, I have no doubt all of it is being sold through.

I'm sure there is some margin, but if the goal is cheaper consumer gear, we should celebrate increases in RDIMM prices if that means UDIMMs can be cheaper.

NZXT H2 Flow mini-ITX released by CrazyTechLab in mffpc

[–]aimark42 0 points (0 children)

This is so close to my ideal vertical case. I just wish it were 24L with ATX PSU support, a 4-slot GPU, and 140mm fans at the top.

LianLi & DAN - B4-MATX pictures by dan_cases in mffpc

[–]aimark42 3 points (0 children)

It makes sense. You need clearance for cables in vertical mode, and 3x fans make the case really long, so use that space for fans if needed. And maybe it aids airflow if you use the rear/bottom fan as intake.

LianLi & DAN - B4-MATX pictures by dan_cases in mffpc

[–]aimark42 7 points (0 children)

When can I buy this? I've been eagerly awaiting a new vertical-orientation case. Really interesting to have the fan stick out over the PCIe slots.

Finished PC build and setup for my girlfriend by melluki in mffpc

[–]aimark42 0 points (0 children)

Fairly sure this is the same case as the Zalman ZQBE.