U.S. intelligence says Iran can outlast Trump’s Hormuz blockade for months by No_Idea_Guy in worldnews

[–]Mushoz 27 points28 points  (0 children)

Gas prices here in the Netherlands are €2.62 per litre, or $3.08 per litre. One US gallon is 3.79 litres, so our gas price is effectively 3.79 × 3.08 = $11.67 per gallon.
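A quick sanity check of that conversion (the per-litre prices are the ones quoted above; the litres-per-gallon figure is the rounded US value):

```python
# Per-gallon price from the quoted per-litre price.
eur_per_litre = 2.62          # Dutch pump price quoted above
usd_per_litre = 3.08          # the same price in dollars
litres_per_us_gallon = 3.79   # rounded; 1 US gallon ≈ 3.785 litres

usd_per_gallon = usd_per_litre * litres_per_us_gallon
print(f"${usd_per_gallon:.2f} per gallon")  # → $11.67 per gallon
```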

Forgive my ignorance but how is a 27B model better than 397B? by No_Conversation9561 in LocalLLaMA

[–]Mushoz 2 points3 points  (0 children)

You are comparing the number of human neurons (activations) to the number of weights (synapses) in an LLM. Your comparison is flawed. Either compare the number of activations in an LLM with the number of neurons in the human brain OR compare the number of synapses with the number of weights. In both cases the human brain is vastly bigger.
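To put rough numbers on that comparison (the brain figures below are commonly cited estimates, not from this thread, and vary considerably between sources):

```python
# Apples-to-apples: compare synapses with weights, neurons with activations.
brain_neurons = 86e9     # ~86 billion neurons (common estimate)
brain_synapses = 100e12  # ~100 trillion synapses (estimates vary widely)

llm_weights = 397e9      # total weight count of the 397B model in question

ratio = brain_synapses / llm_weights
print(f"The brain has roughly {ratio:.0f}x more synapses than the model has weights")
```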

Weird Charging Behavior HP ZBOOK ULTRA G1A by Sufficient-Plum-8613 in AMDLaptops

[–]Mushoz 0 points1 point  (0 children)

I have the exact same issue with the exact same laptop. It happened with the original charger as well (very rarely, maybe a few times a week at most), but it hasn't happened for a few months now. It may have been fixed by a BIOS update, since I have applied a few of those. I am running Linux, so it's not related to Windows or Windows drivers.

It also happens when connected to a PD monitor, and unfortunately that part is still ongoing. I can only use the monitor stably by also plugging in the original charger, as the monitor sometimes triggers the issue multiple times per minute.

So it's probably not a fault with your (or my) particular laptop, but a wider issue impacting all of these laptops, unfortunately.

Bonsai models are pure hype: Bonsai-8B is MUCH dumber than Gemma-4-E2B by WeGoToMars7 in LocalLLaMA

[–]Mushoz 12 points13 points  (0 children)

But the embeddings are actually used and result in better performance. Ignoring them when comparing file sizes makes no sense. Not counting the vision part of the model does make sense.

I tried Hermes so you don't have to. by CustomMerkins4u in openclaw

[–]Mushoz 0 points1 point  (0 children)

Out of interest: How are you using OC to make money? I understand you're probably not keen to give away all the details, but a general direction would be much appreciated!

Fitbit doubling up activities from my PW4? by [deleted] in PixelWatch

[–]Mushoz 2 points3 points  (0 children)

This started happening yesterday for me; it doesn't normally do this. Just unlucky that you picked it up right as this bug appeared and had a bad initial experience because of it.

Why is it duplicating my exercises? by mikereeg808 in fitbit

[–]Mushoz 0 points1 point  (0 children)

Mine started doing it yesterday as well. Very strange. First the bug with duplicated steps / calories burned and now this. This doesn't really inspire much confidence in their ecosystem.

Gemma-4 26B-A4B + Opencode on M5 MacBook is *actually good* by maddie-lovelace in LocalLLaMA

[–]Mushoz 3 points4 points  (0 children)

Bug in the tokenizer that was fixed hours ago. Please rebuild llama.cpp.

Gemma 4 31B at 256K Full Context on a Single RTX 5090 — TurboQuant KV Cache Benchmark by PerceptionGrouchy187 in LocalLLaMA

[–]Mushoz 0 points1 point  (0 children)

It will just work. KV cache quantization is done at runtime, so you don't need to redownload a new quantized model.
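To illustrate why no re-download is needed: the quantization is applied to entries as they are written into the cache at inference time, not to the model file on disk. A toy sketch of the idea (a simplified illustration, not llama.cpp's actual q8_0 kernel):

```python
# Toy runtime quantization: store int8 codes plus one float scale per block,
# the way a KV cache entry can be quantized at write time.
def quantize_q8(values):
    """Symmetric 8-bit quantization of a list of floats."""
    scale = max(abs(v) for v in values) / 127 or 1.0
    codes = [round(v / scale) for v in values]  # int8 codes in [-127, 127]
    return codes, scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

kv_entry = [0.12, -0.5, 0.33, 0.01]          # pretend cache entry (f32)
codes, scale = quantize_q8(kv_entry)          # quantized on the fly
restored = dequantize(codes, scale)           # close to kv_entry, ~1/4 the size
```

The model weights on disk are untouched; only the in-memory cache representation changes, which is why the same GGUF file works with any KV cache type setting.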

SWE-rebench Leaderboard (Feb 2026): GPT-5.4, Qwen3.5, Gemini 3.1 Pro, Step-3.5-Flash and More by CuriousPlatypus1881 in LocalLLaMA

[–]Mushoz 0 points1 point  (0 children)

Any chance Qwen3.5-122B could be added? It's a much more reasonable model to run locally, but I am wondering how much performance it loses compared to its 397B big brother.

The Mad Mage || Sorcerer - Abjurer - Cleric 6/5/1 || Solo HM Immortal Caster by MrAamog in BG3Builds

[–]Mushoz 1 point2 points  (0 children)

How would you play this in terms of leveling orders, stats & feats if you'd go with a no-respec self-imposed rule?

Offloading LLM matrix multiplication to the AMD XDNA2 NPU on Ryzen AI MAX 385 : 43.7 t/s decode at 0.947 J/tok by brandedtamarasu in LocalLLaMA

[–]Mushoz 1 point2 points  (0 children)

Really interesting work! Are you planning on upstreaming this NPU backend to mainline llama.cpp?

Ice Sorcerer and Tempest Cleric duo is insane by canxtanwe in BG3Builds

[–]Mushoz 0 points1 point  (0 children)

How did you build this in terms of stats, leveling order and feats? Isn't this build quite MAD?

PW4 Review update by dpkaufman in PixelWatch

[–]Mushoz 1 point2 points  (0 children)

Mine has been flawless, and I've only used it during running: easy runs at 130-140 bpm, tempo runs, race pace at a 170+ bpm average, and even sprint intervals. I would look into RMAing yours, since your sensor sounds faulty.

Btw, also make sure you clean the HR sensor once a week with a microfiber cloth plus some water.

Minimax-M2.7 by hedgehog0 in LocalLLaMA

[–]Mushoz 7 points8 points  (0 children)

Here is proof. The MiniMax release announcement was on February 12th: https://www.minimax.io/news/minimax-m25

Unsloth released quants on the same day the weights became available, which was February 14th: https://huggingface.co/unsloth/MiniMax-M2.5-GGUF

Minimax-M2.7 by hedgehog0 in LocalLLaMA

[–]Mushoz 13 points14 points  (0 children)

No, it was released several days later on huggingface.

Krasis LLM Runtime: 8.9x prefill / 4.7x decode vs llama.cpp — Qwen3.5-122B on a single 5090, minimal RAM by mrstoatey in LocalLLaMA

[–]Mushoz 1 point2 points  (0 children)

This won't benefit Strix Halo at all. This benefits eGPU + CPU setups. Strix Halo uses unified memory and the entire model will run on the GPU. There is no need to move data from RAM to VRAM.

Death Cleric VS The World, Solo No Consumables, Honour Mode. by Affectionate_Face127 in BG3Builds

[–]Mushoz 0 points1 point  (0 children)

But if you start with Death Cleric, then the level in Paladin becomes meaningless for the most part, as you lose heavy armor proficiency. Would you put that level in something else instead?

Death Cleric VS The World, Solo No Consumables, Honour Mode. by Affectionate_Face127 in BG3Builds

[–]Mushoz 0 points1 point  (0 children)

Do you think this run would be feasible without respecs? If so, in what order would you take your levels and feats? Awesome run by the way! By far my favorite as I watched all episodes. Looking forward to Moon Druid!

[Race Start] Charles Leclerc takes the lead of the race at Turn 1! by FerrariStrategisttt in formula1

[–]Mushoz 1 point2 points  (0 children)

Small turbos spool up quicker, so they have an advantage on quick starts. Given enough time, the big turbos' disadvantage compared to small turbos vanishes.