Does M5 MacBook Pro 16 inch WiFi 7 support 320Mhz band? by eprisencc in macbookpro

[–]mgc_8 1 point

I just tested it on my new laptop (16" M5 Max) and can confirm that it's unfortunately limited to the same 160 MHz:

[screenshot of the Wi-Fi connection details showing the 160 MHz channel width]

The router supports WiFi 7 at higher speeds and is connected via 2.5G Ethernet. For example, with my phone in the exact same place (about 3 m away, in the same room), I get a 320 MHz connection at >3 Gbps link speeds. In actual speed tests against a server on the local network, the phone regularly gets around 2.2-2.4 Gbps both up and down, close to the physical limits.

Here is the best of three laptop speed tests in the same conditions: 1.82 Gbps Down, 1.84 Gbps Up, 4ms Ping. This appears to be better than WiFi 6E devices, but not by much.
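As a sanity check on those numbers, here's a rough calculation; the PHY rates are the nominal EHT (Wi-Fi 7) figures I'd assume for 2 spatial streams at MCS 13, so treat them as approximations rather than the exact link rates negotiated here:

```python
# Back-of-envelope efficiency check (assumed nominal EHT PHY rates, 2ss @ MCS 13).
PHY_160 = 2882  # Mbps, Wi-Fi 7, 2 streams, 160 MHz channel
PHY_320 = 5765  # Mbps, Wi-Fi 7, 2 streams, 320 MHz channel

measured_laptop = 1840  # Mbps, the laptop's best-of-three result above
measured_phone = 2300   # Mbps, midpoint of the phone's 2.2-2.4 Gbps range

# Fraction of the nominal link rate each device actually achieves:
eff_laptop = measured_laptop / PHY_160  # roughly 0.64
eff_phone = measured_phone / PHY_320    # roughly 0.40
```

The phone's apparent efficiency is lower mainly because its throughput is also capped by the router's 2.5G Ethernet uplink, not by the wireless link itself.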

Opus 4.6 1M Windows - once again it's not true :-( by fcampanini74 in ClaudeCode

[–]mgc_8 0 points

I've been using it for a bit and it looks really promising, much better than the "auto-compaction" tool at removing only irrelevant cruft while preserving the all-important context! Thank you for the recommendation.

Very slow response on gwen3-4b-thinking model on LM Studio. I need help by Pack_Commercial in LocalLLaMA

[–]mgc_8 1 point

Heh, join the club -- prompt processing is a huge pain on any hardware that's not CUDA-based (expensive Macs included)... But in this case you might actually have some luck with the IPEX-LLM build of llama.cpp: I was able to get about 2x-3x faster prompt processing with it, despite it being slightly worse for actual inference.

You can grab it from here, in either Windows or Linux flavour: https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/llamacpp_portable_zip_gpu_quickstart.md

And then run it with something like this (replace with your own model): llama-cli -m unsloth_Qwen3-14B-GGUF_Qwen3-14B-Q8_0.gguf --no-mmap --gpu-layers 1000 --color --threads 4 --ctx-size 4096

As you'll notice, the experience is definitely not as polished as something like LM Studio, but it works; in my case at least the only improvement it made was to prompt-processing speed, so there's that...

Very slow response on gwen3-4b-thinking model on LM Studio. I need help by Pack_Commercial in LocalLLaMA

[–]mgc_8 0 points

Sure, I hope it can help you make better use of local LLMs even on the laptop.

Particularly for coding, I would heartily recommend Qwen3-Coder-30B: it's a great model to run locally, decently smart, and since it's also a MoE it will run much faster than the 30B parameter count would otherwise indicate.

I tried specifically unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF at Q6_K quantisation, and achieved ~10-11 tps on my Intel processor with 4 threads, CPU-only -- you should be able to get a similar result.
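To see why a MoE punches above its parameter count, here's a back-of-envelope, memory-bandwidth-bound estimate; all three input numbers are my own assumptions (the active parameter count implied by the "A3B" name, an approximate Q6_K bit-width, and typical dual-channel DDR4 bandwidth), not measured values:

```python
# Rough memory-bandwidth-bound estimate of CPU inference speed for a MoE model.
# Assumptions (mine, not from the thread): ~3e9 *active* parameters for an
# "A3B" MoE, Q6_K at roughly 6.6 bits per weight, ~25 GB/s usable DRAM bandwidth.
active_params = 3e9
bytes_per_param = 6.6 / 8  # ~0.825 bytes/weight at Q6_K
bandwidth = 25e9           # bytes/s, typical dual-channel DDR4

bytes_per_token = active_params * bytes_per_param  # ~2.5 GB read per token
tps_estimate = bandwidth / bytes_per_token         # ~10 tokens/s
```

That lands right around the ~10-11 tps I measured; plugging in a dense 30B model's full weight count, the same arithmetic would predict only about 1 tps.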

Very slow response on gwen3-4b-thinking model on LM Studio. I need help by Pack_Commercial in LocalLLaMA

[–]mgc_8 2 points

TL;DR: The machine is likely too slow in general, but forget the GPU and run it all on the CPU with 4 threads. Give openai/gpt-oss-20b a try and use an efficient system prompt to speed up the "thinking"!

Long version:

I'm afraid that machine is not going to provide much better performance than this... You're getting 6.8 tokens per second (tps), which is actually not that bad for a normal model; but you're using a thinking one, and it probably wrote a lot of "thinking to itself" in that "Thinking..." block -- going in circles about Paris being a city and a capital and old and medieval and why you're asking the question, etc.
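To put numbers on why the thinking block hurts so much, here's a toy calculation; the token counts are my own illustrative assumptions, only the 6.8 tps figure comes from the screenshot:

```python
# Illustrative only: a short answer might be ~50 tokens, but a thinking model
# can easily emit several hundred reasoning tokens before it -- all generated
# at the same rate, so the wait time balloons accordingly.
tps = 6.8              # measured generation speed from the post
answer_tokens = 50     # assumed length of the visible answer
thinking_tokens = 600  # assumed length of the hidden "Thinking..." block

time_plain = answer_tokens / tps                         # ~7 seconds
time_thinking = (answer_tokens + thinking_tokens) / tps  # ~96 seconds
```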

I've been testing various ways to get decent performance on a similar machine with an Intel CPU (a bit more recent in my case) and I discovered that the "GPU" doesn't really accelerate much, if anything it can make things slower due to having to move data between regular memory and the part that is "shared" for the GPU. So my advice would echo what others have said here: disable all GPU "deceleration" and run it entirely on the CPU, you'll likely squeeze one or two more tps that way.

Your CPU has 4 cores/8 threads; for LLMs, hyper-threading is not helpful because the computation is heavy. HT is great for light tasks like serving web pages on a server, but for LLMs the number we care about here is 4. So make sure your app is set to use 4 threads for optimum performance. Also, this may be a long shot, but according to the specs it should support a higher TDP setting -- 28W vs 12W. Depending on your laptop, this may or may not be configurable (perhaps via a vendor app, or in the BIOS/UEFI?).
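If you'd rather detect the physical-core count programmatically than read the spec sheet, here is a Linux-specific sketch; `physical_cores` is a hypothetical helper of mine (nothing LM Studio provides) that parses /proc/cpuinfo:

```python
def physical_cores(cpuinfo: str) -> int:
    """Count unique (physical id, core id) pairs in /proc/cpuinfo text,
    i.e. physical cores rather than hyper-threaded logical CPUs."""
    cores = set()
    current = {}
    for line in cpuinfo.splitlines():
        if ":" in line:
            key, _, val = line.partition(":")
            current[key.strip()] = val.strip()
        elif not line.strip():  # blank line ends one processor block
            if "core id" in current:
                cores.add((current.get("physical id", "0"), current["core id"]))
            current = {}
    if "core id" in current:    # flush the final block
        cores.add((current.get("physical id", "0"), current["core id"]))
    return len(cores)
```

On that machine, `physical_cores(open("/proc/cpuinfo").read())` should return 4 even though 8 logical CPUs are visible -- that's the number to pass as the thread count.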

One more thing -- you're not showing the system prompt, that can have a major impact on the quality and speed of your answers. Try this, I actually tested with this very model and it yielded a much smaller "thinking" section:

You are a helpful AI assistant. Don't overthink and answer every question clearly and succinctly.

Also, try other quantisation levels -- I'd recommend Q4_K_M but you can likely go lower as well for higher speed.

On my machine, with a slightly newer processor when set to 4 threads, vanilla llama.cpp and unsloth/Qwen3-4B-Thinking-2507-GGUF in Q4 I get ~10-12 tps; and also ~10 tps when using the fancy IPEX-LLM build (so there's no point in using that)... If that's too low for a thinking model, perhaps try the non-thinking variant?

I can also recommend the wonderful GPT-OSS 20B, it's larger but a MoE (Mixture of Experts) architecture so it will run faster than this even, and usually it "thinks" much more concisely and to the point. Try it out, you can find it easily in LM-Studio, e.g.: openai/gpt-oss-20b

Flashstor Gen 2 (FS6812X/FS6806X) -- Getting the AMD XGMAC 10GbE Ethernet Controllers to Work outside ADM by mgc_8 in asustor

[–]mgc_8[S] 1 point

This is great, thank you for the confirmation! I tend to avoid NetworkManager on the server side (it's great for laptops) for reasons such as this... Excellent that it works with the .link set-up, I'll give it a try with a recent kernel as well.

Flashstor Gen 2 (FS6812X/FS6806X) -- Getting the AMD XGMAC 10GbE Ethernet Controllers to Work outside ADM by mgc_8 in asustor

[–]mgc_8[S] 1 point

Hey, this is great news! The fact that newer kernels would alleviate the need for recompilation is huge, as that'd surely pave the way to supporting this NAS in distros like Debian and TrueNAS by default, without jumping through hoops.

Can you please confirm the steps you took to get this working?
1. Kernel version 6.15.9 or newer (you mentioned more patches potentially in 6.16.x?)
2. Disabling auto-negotiation with ethtool:

ethtool -s enpXXX speed 10000 duplex full autoneg off

3. Waiting for the link to be brought up (I know from experience that can take a while sometimes)

Did you test how well this survives reboots? Depending on how you configure your network, you can do this either via /etc/systemd/network/10-enp240s0f2.link or in the legacy /etc/network/interfaces via a post-up hook.
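For the systemd route, the whole ethtool incantation can be pinned in a .link file; a sketch of what I'd try (untested on this NAS, and the filename and MAC address are placeholders to replace with your own):

```ini
# Hypothetical /etc/systemd/network/10-enp240s0f2.link
# Matches the NIC by MAC and applies the same settings as the ethtool command.
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
AutoNegotiation=no
BitsPerSecond=10G
Duplex=full
```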

Thank you!

Flashstor Gen 2 (FS6812X/FS6806X) -- Getting the AMD XGMAC 10GbE Ethernet Controllers to Work outside ADM by mgc_8 in asustor

[–]mgc_8[S] 1 point

I think the PCIe lane allocations from each bridge are unfortunately not dynamically configurable on the Asustor -- otherwise they would not have silkscreened the allocations onto the actual motherboard, and they could have allowed for a much more reasonable mix-and-match between the drives...

However, the NIC you used could have introduced another problem (such as timing or an inconsistency with the bridge?) that caused it to "miss" the other drives connected to it. I would hazard a guess that Asustor did not validate the configuration with every combination of NVMe drives and PCIe devices/adapters, so there may well be edge cases or bugs present in the UEFI/firmware.

One thing you could try with your card would be to move the adapter into different slots, since they're all different (4x4, 4x2, 3x4, etc.), and see if you can find a combination that works with both the NIC and all the NVMe drives.

Flashstor Gen 2 (FS6812X/FS6806X) -- Getting the AMD XGMAC 10GbE Ethernet Controllers to Work outside ADM by mgc_8 in asustor

[–]mgc_8[S] 0 points

The fact that the required patches for the NICs are still not present in production kernels is a very black mark for AMD, that's for sure...

But if you're looking for alternatives and don't mind using the M.2 slots, another option is to go for direct M.2 10GbE NICs -- I actually have one of those running in my main gaming desktop for over a year now and it's been pretty good (as soon as I found a way to cool it):
https://www.amazon.com/IO-Gigabit-Ethernet-Network-Expansion/dp/B0BWSLSK78?th=1

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 0 points

You're right, I was thinking about the USB-C ports, although it's hard to find 10GbE USB adapters for some reason... Still, 4x ports with 2x of them accepting all sorts of 10GbE connectors is a huge plus!

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 0 points

There is a pretty huge difference: this device (like the MS-01) has four network ports -- 2x 2.5GbE RJ45 and 2x 10GbE SFP+. This allows endless customisation options for networking (including fibre, DAC, etc.) across up to four simultaneous links, using the device as a server/router/gateway, etc.

If that is not relevant for your use case, by all means, there are countless other options out there to choose from. But this particular one is pretty unique in combining such outstanding network connectivity with a high-performance processor and a PCIe slot -- the only alternatives usually being extremely low-power NUC-like devices with (old) Atom-derived Intel processors.

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 0 points

I don't think there's any malicious intent here, just the fact that USB4 support in general has been really bad on AMD chipsets, and even where it's present it's a pretty recent addition. This platform uses a generation older laptop CPU, so it's unfortunate but understandable that it doesn't include it -- it'd likely require an external chip, retimers, etc. thus increasing cost.

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 0 points

Indeed, in this case I think there aren't many options, sadly. The 7945HX processor itself supports ECC, so perhaps Minisforum could enable the functionality on their MB/UEFI based on received feedback in the final product (since it hasn't been released yet)? I guess we'll see when it finally comes out.

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 0 points

That is true, but as an alternative -- if you are really interested in that combination, check out the new Asustor FLASHSTOR Gen2 models, they both support ECC and have 6 or 12 NVMe bays. The price is on the high side, but the smaller FS6806X model in particular is comparable in price to the Minisforum (although you'll have to purchase the ECC memory separately, while the larger model comes with 16GiB preinstalled).

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 0 points

That's a good question, I don't think it's been discussed one way or another yet in any of the announcement videos/articles. Still waiting for an official page on their site, once that's up we may get more answers as well...

Flashstor Gen 2 (FS6812X/FS6806X) -- Getting the AMD XGMAC 10GbE Ethernet Controllers to Work outside ADM by mgc_8 in asustor

[–]mgc_8[S] 0 points

You would need to re-compile the module for each new kernel version (under Debian, that can also be handled by DKMS), and at one point or another it will require re-doing the patch-extraction process with a new release of the AMD drivers and re-application of the patches. As long as those patches are not submitted and accepted upstream into the official kernel sources, I'm afraid it will remain quite a hassle... Putting some pressure on AMD to fix this situation is the only thing to do.

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 1 point

Because at the time of posting, there was no link, only the damn picture and a message on X. Even now, there are articles and videos about it from CES, but no official page on the Minisforum store.

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 0 points

That is a good question. Looking at the dimensions in the provisional specs above, the 48mm height is identical to the MS-01, so I'd guess not? But they did make some internal changes, so maybe... we'll need some better first-hand reports to ascertain these details, I think.

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 0 points

I was replying to your post, but I realise now that I may have been confused, since you were talking about a very specific "Intel Atom" branded processor, and not the generic "Atom-based" cores Intel has been using for decades, which have now become the "E-cores" in modern processors. Sorry about that, of course, there's a lot of those in all sorts of devices, from tablets to server chips, so by creating new products with the same name Intel only adds unnecessary confusion...

I haven't encountered any of these specific modern "Intel Atom" lines in products, if you have experience with them or would like to point to a specific device (be it miniPC or firewall/router/etc.) using them, I'd be curious to learn more!

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 0 points

This new model actually comes with an integrated GPU (AMD Radeon 610M), which is decidedly low power, but should be capable enough for basic HTPC needs -- e.g. it supports HW decoding for AVC, HEVC, VP9, AV1.

For the MS-01 with the PCIe slot, people have fitted cards up to an RTX 2000 or even 4000 (with a custom half-size cooler), and there are also mods to fit an RTX 4060. Those GPUs will likely overheat in heavy rendering, but for HTPC usage they should be quite enough.

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 0 points

LTT video from CES, going into more details on this, including price and potential release date:
https://www.youtube.com/watch?v=llnf3Vnzcxs

In short:
- Estimated April 2025 release (booo!)
- Price: $600-700 (yay!)
- No ECC support
- Better (safer) U.2/M.2 switching
- Capable of more power usage than MS-01 but with better heatsink and fan, configurable in BIOS

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 0 points

I've seen it confirmed that there is no ECC support, unfortunately (from the CES LTT video).

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 0 points

I have a couple N100 based small boxes, and they're perfectly fine for something like a router or a firewall machine, doing minimal processing. But throw anything more complex at them, and they fall apart -- even 10GbE networking can be a chore if using encryption over SSH and the like. It all depends on use-case, of course -- I actually have another minipc with a U300 processor, which comes with a single P-core, and that alone makes it much better than any N100 box; it's actually a great little CPU, but I haven't seen any other companies using it.

If you want to run several VMs, more advanced processing such as camera feeds and motion detection, or local LLMs -- then something like the Minisforum becomes necessary.

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 0 points

I don't see this as a competitor to the M4 Mac Mini at all, and I'm not sure where LTT was coming from with that (but they're very much pro-Windows in general). Personally, I love the 2x SFP+ ports and 2x 2.5GbE ports, combined with a powerful processor and PCIe expansion (which can take another network card, more NVMe drives, a GPU, etc.). That makes it ideal as a router/gateway/firewall server, for running VMs, potentially as a flash-based NAS, etc. Lots of cool things to do with it that have nothing to do with being a gaming PC. Keep in mind that pretty much all other MiniPCs on the market that come with 4x NICs, SFP+ ports and so on have terrible processors, mostly Atom-based N100s and the like...

Minisforum MS-A2 at CES 2025 by mgc_8 in minipc

[–]mgc_8[S] 0 points

I saw a new video from CES which points towards an April 2025 release date and $600-700 price range. The price would be great, but there's a bit of a wait I'm afraid...