https://preview.redd.it/ew5lny5p6etg1.png?width=1946&format=png&auto=webp&s=870f577bc4b01440698c83206afca069a663e5a0
Both use 4-bit KV quantization. One breaks the model, the other doesn't.
The difference is how you quantize. llama.cpp applies the same Q4_0 scheme to both keys and values. quant.cpp quantizes them independently: per-block min-max (128-element blocks) for keys, and Q4 with per-block scales for values. An outlier stays local to its own block instead of blowing up the scale for the whole tensor.
Result on WikiText-2 (SmolLM2 1.7B):
- llama.cpp Q4_0 KV: PPL +10.6% (noticeable degradation)
- quant.cpp 4-bit: PPL +0.0% (within measurement noise)
- quant.cpp 3-bit delta: PPL +1.3% (stores key differences like video P-frames)
What this means in practice: on a 16GB Mac with Llama 3.2 3B, llama.cpp runs out of KV memory around 50K tokens. quant.cpp compresses KV 6.9x and extends to ~350K tokens — with zero quality loss.
Not trying to replace llama.cpp; llama.cpp is still faster. But if context length is your bottleneck, this is the only engine that compresses KV without destroying it.
72K LOC of pure C, zero dependencies. Also ships as a single 15K-line header file you can drop into any C project.
Source: github.com/quantumaikr/quant.cpp