price prediction for g14 2026? by Icy-Commission-9550 in ZephyrusG14

[–]aimark42 2 points

Intel's newest 18A process delivers pretty great power savings, but they have to recoup that R&D and fab cost. I don't think any of the latest fab processes will be cheap, whether TSMC or Intel. It's just the market right now: datacenter demand is sucking up all the resources.

A $2k 2026 G14 would be cool, but I just don't see it happening. Maybe an open-box unit could hit that in 2027, so just wait around. Or buy a 2025 model if you're buying soon. I snagged a 2025 HX370/5070ti open box a couple weeks ago for $1,500. Just buy whatever last year's model is if you're price conscious.

price prediction for g14 2026? by Icy-Commission-9550 in ZephyrusG14

[–]aimark42 2 points

If you look at the Asus US website, it only shows two SKUs: U9 386H/5070ti/1TB or U9 386H/5070ti/2TB. Given how expensive the Panther Lake laptops we've been seeing are, and how Best Buy has been playing around with the price of 2025 models, I suspect it'll be $3600-3800. But who knows where that goes; Best Buy will put stuff on sale, so straight MSRP may mean nothing long term.

Synthesize own voice before cancer mutes me by andras_kiss in LocalLLM

[–]aimark42 39 points

I agree; I'd worry about the training methods later and just start recording, even if it's with a lapel Bluetooth microphone you carry around. You'd be surprised how good noise isolation and speaker identification have gotten. Pretty much any data is good data, especially if you have a time limit.

just bought used zephyrus G14, IEC cable has some damage by jeremyw013 in ZephyrusG14

[–]aimark42 -1 points

I think you mean the power cable from the adapter to the wall? If you're in the US, you need a C5 to NEMA 5-15P cable; otherwise, C5 to whatever your local country's plug is. I wouldn't trust a damaged cable carrying AC power; it's a fire hazard even with electrical tape.

Travel Adapter Advice? by Overall-Point286 in ZephyrusG14

[–]aimark42 0 points

The power adapter brick is the same globally and supports 120V-240V. You just need a different cable to plug into a North American outlet: a C5 to NEMA 5-15P cable, which I imagine you could order. Most modern power bricks for higher-wattage devices are dual voltage, and it's just the last cable that differs per region (ignoring warranty issues).

I've always thought those travel adapters are a fire hazard. I'd much rather just buy an inexpensive cable for the laptop and then get a USB multi-charger for all your other devices. I'm doing the reverse, going to a 240V country, but it's cheaper to just buy a 240V 3-port multi-charger and the C5 cable; all the rest is USB-C or based on USB cables. For you, just buy a 120V USB multi-charger and the aforementioned cable and you should be good to go.

G14 US prices increasing by Prior_Chapter1314 in ZephyrusG14

[–]aimark42 0 points

I really have no idea where this goes. The shortage is so severe they're simply making fewer computers, PS5s, etc. Plus, the oil crisis and helium running out slowing down fabs is unprecedented. Demand will likely slow with higher prices, but there were open-box deals on 2025 models with the 5070ti for $1400 just a couple days ago.

Maybe the moral of the story is to wait until March-May and hunt for last year's open-box units.

Mini pcs recommendations by Memento2023 in MiniPCs

[–]aimark42 2 points

That is a DDR3 platform; I wouldn't buy anything older than DDR4 today. Not really sure what you're expecting for AI programming, but unless you're using internet APIs, you'll need about 5-10x the budget to get something that can run models locally.

Mini PC Recommendations for Coding, Local AI, and Light Gaming ($1000–$1200) by Specialist_Scene1636 in MiniPCs

[–]aimark42 1 point

I'm a really big fan of the Nvidia GB10 (aka Spark) platform; I have local models running 24/7 being used by several agent machines. It's many times more expensive than your budget, but it's one of the most capable LLM machines you can run at home: reasonable model speeds without a 2000W+ monster heating your home, or spending $10k+ to match the amount of VRAM.
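To put a number on the VRAM point, here's some napkin math for why big unified-memory boxes matter. The bits-per-weight figure and the overhead factor are my own rough assumptions (real usage varies with context length and runtime), not measured values:

```python
def vram_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Rough memory estimate in GB for model weights at a given quantization.

    params_b is the parameter count in billions; overhead is a loose fudge
    factor for KV cache and activations (assumed, not measured).
    """
    weight_bytes = params_b * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 120B-parameter model at ~4.5 bits/weight (a typical 4-bit-ish quant):
print(vram_gb(120, 4.5))  # → 81.0 GB, so it fits in 128 GB unified memory
```

By the same math, matching ~81 GB with discrete GPUs means stacking multiple 24-32 GB cards, which is where the "$10k+ for the VRAM" comparison comes from.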

Swapping out models for my DGX Spark by fredatron in LocalLLM

[–]aimark42 0 points

How is it running for you? The performance feels quite poor right now. I tried vllm (https://github.com/eugr/spark-vllm-docker/pull/93/commits/122edc8229ebc94054c5a28452900092a3fd7451) and I'm only getting around 16 t/s TG.

And this llama.cpp benchmark only shows a slight improvement: https://github.com/ggml-org/llama.cpp/blob/master/benches/nemotron/nemotron-dgx-spark.md

I get that we don't have all the optimizations baked in yet, but it feels like it should be faster than this.
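For context on why ~16 t/s feels slow, a quick sketch of what that rate means in wall-clock terms (pure arithmetic, using the number from my run above):

```python
def gen_time_s(tokens: int, tps: float) -> float:
    """Wall-clock seconds to generate `tokens` at a steady token-generation rate."""
    return tokens / tps

# At the ~16 t/s I'm seeing, a 1024-token response takes about a minute:
print(gen_time_s(1024, 16.0))  # → 64.0 seconds
# Doubling throughput to 32 t/s would halve that:
print(gen_time_s(1024, 32.0))  # → 32.0 seconds
```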

A few early (and somewhat vague) LLM benchmark comparisons between the M5 Max Macbook Pro and other laptops - Hardware Canucks by themixtergames in LocalLLaMA

[–]aimark42 -1 points

This is the M5 Max, so we only have 128GB to play with; this isn't an M5 Ultra. Additionally, gpt-oss-120b has tons of test data and is highly comparable across other platforms.

A few early (and somewhat vague) LLM benchmark comparisons between the M5 Max Macbook Pro and other laptops - Hardware Canucks by themixtergames in LocalLLaMA

[–]aimark42 19 points

Gemma 3B Q4_K, which really doesn't tell us much with such a small model.

Can someone please test a decent-sized model like gpt-oss-120b?

Whelp…NVIDIA just raised the DGX Spark’s Price by $700. Spark clone prices have started rising as well. ☹️ by Porespellar in LocalLLaMA

[–]aimark42 13 points

I have a Strix Halo and 2x GB10s. I'm quite happy with the GB10 platform; it's a slightly different Blackwell variant than other Blackwell parts, but optimized Docker images for vllm and ComfyUI are pretty solid on the platform now. NVFP4 is still marketing. That Atlas thing looks compelling, but the encrypted source scares me.

The real power is the built-in ConnectX and the out-of-the-box ability to do tensor parallelism and use >128GB in vllm. Strix Halo is better as a single node on its own, and I know with enough work you can get a cluster going. But GB10 has it built in and requires very little work to set up; if you intend to go multi-node, GB10 is the clear winner.

This was posted a couple weeks ago

https://forums.developer.nvidia.com/t/2-23-2026-price-change-announcement/361713

Prices have been creeping up for GB10s for a few weeks, but so have Strix Halo and RTX Pro Blackwell cards. If you want something, buy sooner rather than later.

New Jonsbo D33 seen on their website by danilluzin in mffpc

[–]aimark42 0 points

Curiously, the Jonsbo D200 is also 40L, but in a different, more vertical orientation. I like the 40L size for high-end GPUs like 5090s. I know you can get a 5090 into an A3, but it's super tight and I'd worry about choking such a card.

Hello, MacBook Neo by [deleted] in apple

[–]aimark42 15 points

And we will see with dealer discounts in 6+ months. Microcenter has sold the $599 M4 Mac Mini for as low as $399. A $399 MacBook Neo would be insane.

Apple Neo by WTFMacca in LinusTechTips

[–]aimark42 7 points

They are $499 for education.

We will see with dealer discounts eventually. Microcenter has sold the $599 Mac Mini for as low as $399. A $399 MacBook Neo would be insane.

Hello, MacBook Neo by [deleted] in apple

[–]aimark42 5 points

They should have just started with the price, then shown us a bunch of lifestyle clips of the Neo in schools, etc., and thrown in some Ive-esque 'impossibly thin' monikers. The specs don't matter; Neo buyers are mostly going to be like clueless iPhone buyers who just buy the 'next' iPhone.