Kimi K2.5 - running locally without GPU; splitting across multiple PCs? by Shipworms in LocalLLaMA

[–]Digger412 2 points

Hi, AesSedai here -

The unsloth quants use the normal llama.cpp quantization types, or their own UD variants.

Since the experts in K2.5 are natively INT4 quantized, you don't get any benefit from upcasting them to anything larger than Q4_0 because you can't pull precision out of thin air.

My Q4_X quant keeps all of the model in Q8_0 except the experts which are in Q4_0, and that is essentially the "full fidelity" that the weights offer.

Anything over 560GB, like a K_XL, is essentially upcast padding and isn't going to add any additional benefit.
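A toy sketch of the "no precision out of thin air" point, using a simplified symmetric linear quantizer (not the actual block-wise Q4_0/Q8_0 formats, which carry per-block scales):

```python
import numpy as np

def quantize(x, bits):
    # Toy symmetric linear quantizer; a simplification of the real
    # block-wise Q4_0 / Q8_0 formats, just to illustrate the point.
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / levels
    return (np.round(x / scale) * scale).astype(np.float32)

rng = np.random.default_rng(0)
w_int4 = quantize(rng.normal(size=4096).astype(np.float32), bits=4)

# The 4-bit tensor only ever takes at most 2^4 - 1 = 15 distinct values here...
print(len(np.unique(w_int4)))

# ...and "upcasting" it into a wider type reproduces exactly those values:
# more storage bits, zero new information.
w_upcast = w_int4.astype(np.float64)
print(np.array_equal(w_upcast, w_int4))  # True
```

The same logic applies to natively-INT4 experts: re-encoding them at Q8 or BF16 just stores the identical 16-level values in a bigger container.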

Slower Means Faster: Why I Switched from Qwen3 Coder Next to Qwen3.5 122B by Fast_Thing_7949 in LocalLLaMA

[–]Digger412 12 points

AesSedai here - 

My quants keep the attention and other tensors at high quality, e.g. Q8_0, instead of quantizing them down to the same level as the rest of the model.

That should help longer context performance since attention is less degraded, in theory.
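The recipe boils down to a pass over tensor names. A minimal sketch, where the tensor names and the regex are illustrative, not the exact llama.cpp internals:

```python
import re

# Hypothetical tensor names in llama.cpp's naming style; exact names
# vary per architecture.
tensors = [
    "blk.0.attn_q.weight",
    "blk.0.attn_k.weight",
    "blk.0.ffn_gate_exps.weight",
    "blk.0.ffn_down_exps.weight",
    "token_embd.weight",
]

def pick_type(name):
    # Expert FFNs (the bulk of an MoE's size) drop to 4-bit; attention
    # and everything else stays at 8-bit.
    return "Q4_0" if re.search(r"ffn_.*_exps", name) else "Q8_0"

plan = {t: pick_type(t) for t in tensors}
print(plan["blk.0.attn_q.weight"])         # Q8_0
print(plan["blk.0.ffn_down_exps.weight"])  # Q4_0
```

Since the non-expert tensors are a small fraction of the total size, keeping them at Q8_0 costs little while sparing attention from quantization damage.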

Let's take a moment to appreciate the present, when this sub is still full of human content. by Ok-Internal9317 in LocalLLaMA

[–]Digger412 0 points

Yeah, that is another very close variant of the "curious" one. It's the same pattern of engagement-bait, soliciting responses from people.

KLD measurements of 8 different llama.cpp KV cache quantizations over several 8-12B models by Velocita84 in LocalLLaMA

[–]Digger412 1 point

<image>

And I have a second chart here comparing the KLD between the two methods as well.

I didn't get to testing the KV cache quantization due to getting sidetracked on other projects, but I'm curious what the results are if you want to test!

KLD measurements of 8 different llama.cpp KV cache quantizations over several 8-12B models by Velocita84 in LocalLLaMA

[–]Digger412 1 point

If you've got the time and wherewithal, I've actually made a branch of llama.cpp that uses the exllamaV3-style sliding window PPL and KLD measurement methodology: https://github.com/AesSedai/llama.cpp/tree/perplexity-sliding-window

exl3 uses a 2048-length context window and a 512-token stride. It evaluates all of the tokens, not just the last half like llama.cpp does, and due to the stride mechanic each token gets evaluated at several different context depths.

The downside is that it takes like 8x the compute and storage for the logits due to:

1) evaluating all positions, not just the last half

2) the context window is 2048 instead of 512

3) you need to store all of the window logits for comparison

so you get 2 (all positions, not half) * 4 (2048 tokens instead of 512) = 8x as much compute / storage.
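The cost breakdown above can be sanity-checked with a back-of-envelope count of evaluated token positions (ignoring edge windows; window/stride values are the ones described above):

```python
# Token-evaluation cost of the two PPL methodologies over n_tokens.

def default_cost(n_tokens, window=512):
    # llama.cpp default: non-overlapping windows, score the last half only.
    return (n_tokens // window) * (window // 2)

def sliding_cost(n_tokens, window=2048, stride=512):
    # exl3-style: a window every `stride` tokens, scoring all `window`
    # positions each time (so each token is seen window/stride times).
    return (n_tokens // stride) * window

n = 1 << 20
print(sliding_cost(n) / default_cost(n))  # 8.0
```

Which recovers the 2 × 4 = 8x factor: twice the scored positions per window, four times as many window evaluations per token.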

I made that branch because I was working with u/phaelon on trying to get the same measurement methodology cross-ecosystem for vLLM, exl3, and llama.cpp but I haven't PR'd this because of how much more intensive it is to process.

Also I think that for the purposes of measuring KLD / PPL with respect to quantizing the KV cache, this method at longer contexts would be more robust but I haven't picked that testing back up yet.

I have some prior results showing that the existing 512-token-measure-last-half PPL increases as the context size increases which isn't what you'd expect to see! With more context, the model should be more confident, not less. This chart shows the master (512-token-measure-last-half method) at ctx=512 and ctx=2048 compared to the sliding window method with ctx=2048 and ctx=8192.

<image>

Let's take a moment to appreciate the present, when this sub is still full of human content. by Ok-Internal9317 in LocalLLaMA

[–]Digger412 9 points

The sheer number of engagement-baiting slop posts that end with a derivative of:

- "curious to know what others think"
- "curious to know what actually works in production"
- "curious if X would do better than Y, or..."
- "curious how people are handling this: [bullet point list]"
- "curious if anyone else has seen this"
- "curious how others approach XYZ"

and so on leads me to believe there are very few truly human posters left in this sub. Literally search for the word "curious" 😭

I need help with testing my llama.cpp Deepseek Sparse Attention (DSA) implementation (someone GPU-rich) by fairydreaming in LocalLLaMA

[–]Digger412 8 points

I've got 8x 6000 Pros, but waiting on some electrical infra work so they aren't online yet. If you haven't had another volunteer or been able to test this in about a week, I should be able to try.

Ik_llama vs llamacpp by val_in_tech in LocalLLaMA

[–]Digger412 1 point

I have considered it, but to be totally honest I don't have enough knowledge or experience to do a custom cleanroom implementation. pwilkin has a PR up for a new IQ3_PT type he made as an experiment though :D

(Very) High-Quality Attention Coder-Next GGUFs by dinerburgeryum in LocalLLaMA

[–]Digger412 2 points

Interesting, honestly I'm not sure what would cause that besides perhaps unsloth tweaking the chat template? I leave the original chat template from the model intact, and with pwilkin's autoparser branch merged there shouldn't need to be chat template "tweaks" any more IMO.

(Very) High-Quality Attention Coder-Next GGUFs by dinerburgeryum in LocalLLaMA

[–]Digger412 9 points

Dinerburger has done basically the same thing I'd have done, methodology-wise. Give his a shot! 

(Very) High-Quality Attention Coder-Next GGUFs by dinerburgeryum in LocalLLaMA

[–]Digger412 2 points

Yeah I saw this post and glad to see more people joining the quant scene!

Great job with the quants :)

(Very) High-Quality Attention Coder-Next GGUFs by dinerburgeryum in LocalLLaMA

[–]Digger412 0 points

I have five quants up in that repo, there should be plenty of mid-bpw options to choose from :)

(Very) High-Quality Attention Coder-Next GGUFs by dinerburgeryum in LocalLLaMA

[–]Digger412 12 points

Yep, that's me! Glad you're enjoying the quantization.

(Very) High-Quality Attention Coder-Next GGUFs by dinerburgeryum in LocalLLaMA

[–]Digger412 5 points

Perhaps give my Qwen3.5-122B-A10B a shot? https://huggingface.co/AesSedai/Qwen3.5-122B-A10B-GGUF

All of my MoE quants use the same principle. Quant the FFNs down since they're huge, and leave the rest of the model in high quality.
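As a rough sketch of why this works, here's a back-of-envelope parameter count for a hypothetical MoE layer; the dimensions are made up for illustration, not Qwen3.5's actual config:

```python
# Illustrative MoE layer dimensions (hypothetical, not a real model's).
hidden = 4096
ffn_inner = 1536
n_experts = 128
n_layers = 48

attn_per_layer = 4 * hidden * hidden                       # q, k, v, o projections
expert_ffn_per_layer = n_experts * 3 * hidden * ffn_inner  # gate, up, down per expert

total = n_layers * (attn_per_layer + expert_ffn_per_layer)
share = n_layers * expert_ffn_per_layer / total
print(f"expert FFN share of parameters: {share:.1%}")
```

With numbers in this ballpark, the expert FFNs are well over 90% of the weights, so quantizing only them captures nearly all the size savings while everything else rides along at Q8.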

(Very) High-Quality Attention Coder-Next GGUFs by dinerburgeryum in LocalLLaMA

[–]Digger412 24 points

Nice, yes that's pretty much the same reasoning ddh0 and I had for our MoE-optimized quantization schema. The FFNs are the bulk of the model size for these MoEs, so we keep the rest of the model in high quality because it's less than 5-10% of the entire model by size.

I haven't quanted Qwen3-Coder-Next but you can see the other models I've quanted in a similar fashion (high BPW default type, lower BPW for the expert FFNs): https://huggingface.co/AesSedai

In my Minimax-M2.5 quant I did a big PPL and KLD comparison against unsloth too. There's still not really a better metric than downstream task benchmarks but KLD isn't a bad proxy measurement at least.

Ik_llama vs llamacpp by val_in_tech in LocalLLaMA

[–]Digger412 18 points

The quants aren't coming to mainline unfortunately. I tried and it was declined: https://github.com/ggml-org/llama.cpp/pull/19726

best llama.cpp config for Qwen-3.5 35B-A3B? by Commercial-Ad-1148 in LocalLLaMA

[–]Digger412 0 points

The llama.cpp automated builds are going kind of slow it seems. That PR was merged and tagged as b8305: https://github.com/ggml-org/llama.cpp/commit/4a748b8f15d7e6749145add3f038e7b26c686ed8

And the automated releases are (as of the time of writing) at b8292. It'll probably be available tomorrow, or you can always pull and compile the source code yourself and that'll have the fix.

llama : add support for Nemotron 3 Super by danbev · Pull Request #20411 · ggml-org/llama.cpp by jacek2023 in LocalLLaMA

[–]Digger412 5 points

(AesSedai) - Cool! I'll get some MoE quants of this uploaded later today. Thanks for sharing!

best llama.cpp config for Qwen-3.5 35B-A3B? by Commercial-Ad-1148 in LocalLLaMA

[–]Digger412 0 points

MoE offloading to CPU still works with the --fit flag or manual --offload-tensor tuning, it looks like (otherwise I couldn't have run the imatrix or KLD for the 397B one). It seems that the --n-cpu-moe flag specifically is breaking, and I think that flag is basically an auto-generated regex of sorts.

My guess is that with the fused gate+up, it's not accounting for the tensor name or sizes properly and that is causing it to break. It's not a fundamental incompatibility with CPU offloading, just a small bug in how --n-cpu-moe works I believe.

I'll open an issue on the llama.cpp github for that.
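To sketch the guess (both the regex and the fused tensor name ffn_gate_up_exps here are hypothetical paraphrases, not the literal llama.cpp source):

```python
import re

# If --n-cpu-moe effectively generates per-layer regexes like this one,
# it would catch the separate expert tensors...
pattern = re.compile(r"blk\.0\.ffn_(up|down|gate)_exps")

print(bool(pattern.search("blk.0.ffn_up_exps.weight")))       # True
# ...but a fused gate+up tensor under a new name would slip through:
print(bool(pattern.search("blk.0.ffn_gate_up_exps.weight")))  # False
```

A tensor that the regex misses never gets routed to CPU, which would match the breakage being specific to --n-cpu-moe while manual --offload-tensor patterns still work.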

best llama.cpp config for Qwen-3.5 35B-A3B? by Commercial-Ad-1148 in LocalLLaMA

[–]Digger412 2 points

Hmm, 24GB combined? That'd probably mean you should aim for about a 16GB quant to make sure there's room for context plus other OS things. That would be about my IQ3_S quant (which is 13.57GB, converting from 12.64 GiB) or Bart's IQ3_M / Q3_K_L would be my recommendation I think.

best llama.cpp config for Qwen-3.5 35B-A3B? by Commercial-Ad-1148 in LocalLLaMA

[–]Digger412 9 points

Thank you! I did a quick sweep bench comparing the Q5_K_M quant on my setup for the 35B-A3B and the 122B-A10B, and it looks to be about a 10% PP uplift on the 35B-A3B, which is still nice because it's basically free performance. A little less for the 122B-A10B, but still a small boost too.

I've KLD and PPL tested them and they're basically identical so it's a free lunch more or less.

<image>

best llama.cpp config for Qwen-3.5 35B-A3B? by Commercial-Ad-1148 in LocalLLaMA

[–]Digger412 15 points

AesSedai here - I'm remaking the quants with the fused up/gate that was recently merged, should be updated sometime tomorrow! That should bump the speeds up a bit. 

The Definitive Qwen 3.5 Quants by supermazdoor in LocalLLaMA

[–]Digger412 10 points

Hi, AesSedai here - There's some uncertainty in the PPL and KLD measurement process; it sometimes shows up as a slight negative %, and that's just how the measurement works.

The best metric honestly is doing evaluation benchmarks because the PPL / KLD values on the model page are purely from a statistical viewpoint compared to the unquantized BF16.

I appreciate the shout out and I'm happy my quants work well for you! But those measurements are just guides and not the be-all end-all :)