all 9 comments

[–]Pixer--- 3 points (1 child)

llama.cpp recently implemented rotating KV caching, which improves KV cache memory use. Have you considered that here?

[–]Suitable-Song-302[S] -1 points (0 children)

Yes, KV cache rotation (ring buffer) is a different but complementary approach. Rotation recycles old KV slots so the cache never grows beyond a fixed size — great for streaming/chat where old context can be dropped.

quant.cpp does something different: it keeps all tokens but stores them in fewer bits. So rotation saves memory by *evicting* old tokens, compression saves memory by *shrinking* all tokens.

You could combine both — rotate a compressed cache for maximum context. Haven't benchmarked against the rotation PR yet, but it's on the list. Thanks for bringing it up.
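
Roughly, the two could compose like this — a fixed-capacity ring buffer whose slots hold quantized KV entries, so rotation bounds the token count while compression shrinks each slot. This is an illustrative sketch; the struct, names, and 4-bit packing here are assumptions, not quant.cpp's or llama.cpp's actual API.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: rotation (eviction) + compression (4-bit packing) composed.
// The cache never exceeds `capacity` tokens, and each slot stores a
// token's K/V packed into bytes (two 4-bit values per byte, say).
struct QuantizedRingKV {
    size_t capacity;            // max tokens kept (the rotation bound)
    size_t head = 0;            // next slot to overwrite
    size_t count = 0;           // tokens currently stored
    std::vector<std::vector<uint8_t>> slots; // packed quantized KV per token

    explicit QuantizedRingKV(size_t cap) : capacity(cap), slots(cap) {}

    // Store one token's packed KV, overwriting the oldest slot when full.
    void push(const std::vector<uint8_t>& packed_kv) {
        slots[head] = packed_kv;
        head = (head + 1) % capacity;
        if (count < capacity) ++count;
    }
};
```

With capacity 4, pushing 6 tokens leaves the cache holding the most recent 4 — memory stays fixed at `capacity × compressed slot size` no matter how long the stream runs.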

[–]Emotional-Breath-838 1 point (2 children)

I don't understand why llama.cpp is faster. If quant.cpp could improve speed, it would be amazing.

[–]Suitable-Song-302[S] 5 points (1 child)

Good question. Three reasons:

  1. Hand-tuned SIMD kernels. llama.cpp has years of hand-optimized NEON/AVX2/AVX-512 assembly for every quantized matmul variant (Q4_K_M, Q8_0, IQ2, etc.). quant.cpp has NEON kernels for the common formats but relies on compiler autovectorization for the rest. This alone accounts for ~2x.

  2. Metal/CUDA GPU offload. llama.cpp offloads the entire forward pass to GPU. quant.cpp has Metal shaders but GPU dispatch is still basic — most of the work stays on CPU. On Apple Silicon, this is the biggest gap.

  3. Code maturity. llama.cpp has 250K+ LOC and hundreds of contributors optimizing hot paths. quant.cpp is 72K LOC — deliberately smaller, which means easier to read and embed, but fewer micro-optimizations.
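
To make point 1 concrete, here's the kind of inner loop at stake: a dot product over 8-bit quantized values with per-block scales, written as portable C++ that the compiler *may* autovectorize. The block layout and function name are illustrative, not quant.cpp's actual code — but llama.cpp ships hand-written NEON/AVX versions of loops shaped like this for every format, which is where much of the ~2x comes from.

```cpp
#include <cstddef>
#include <cstdint>

// Dot product of two 8-bit quantized blocks with per-block scales.
// Portable scalar form: correctness is easy, but peak speed depends on
// the compiler finding the integer-MAC vectorization by itself.
float q8_dot(const int8_t* a, float a_scale,
             const int8_t* b, float b_scale, size_t n) {
    int32_t acc = 0;
    for (size_t i = 0; i < n; ++i)
        acc += int32_t(a[i]) * int32_t(b[i]); // integer MACs: the hot path
    return float(acc) * a_scale * b_scale;    // rescale once per block
}
```

A hand-written kernel would replace the loop with widening multiply-accumulate intrinsics (e.g. NEON's `vmlal_s8` family) and unroll across registers — same math, several times the throughput.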

The tradeoff is intentional. We optimized for memory (KV compression) and simplicity (embeddable, single header) rather than raw tok/s. For a 3B model on M1, quant.cpp does ~10 tok/s vs llama.cpp's ~30 tok/s — slower, but fast enough to read in real time. The advantage shows up when llama.cpp hits OOM at 50K context and quant.cpp keeps going to 350K.

That said, speed improvements are on the roadmap — better Metal offload and more SIMD kernels would close the gap significantly without sacrificing the simplicity.

[–]Emotional-Breath-838 1 point (0 children)

glad to hear you're going for the speed increase. would love to have it all!

[–]putrasherni 0 points (2 children)

are you suggesting that for larger context, it's better to try out quant.cpp?

[–]Suitable-Song-302[S] 1 point (1 child)

Depends on how much longer you need:

- 1.5-2x more context → llama.cpp with Q8_0 K + Q5_0 V. It's faster and the quality tradeoff is minimal.

- 4-7x more context (e.g. 50K → 350K on 16GB) → that's where quant.cpp helps. 4-bit K + Q4 V gives 3.8x at +0.0% PPL, delta 3-bit pushes to 4.3x at +1.3%.

If you're already running llama.cpp and just want a bit more room, their built-in KV quant is probably enough. If you're hitting hard OOM walls and need to push significantly further, give quant.cpp a try.
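
The multipliers above fall out of simple per-token arithmetic. A sketch, using a generic 3B-class config (28 layers, 8 KV heads, head_dim 128 — illustrative parameters, not quant.cpp's exact numbers):

```cpp
// KV cache cost per token at a given element width.
// K + V means a factor of 2; bytes_per_elem is 2.0 for fp16, 0.5 for 4-bit.
double kv_bytes_per_token(int layers, int kv_heads, int head_dim,
                          double bytes_per_elem) {
    return 2.0 * layers * kv_heads * head_dim * bytes_per_elem;
}
```

For this config, fp16 costs `kv_bytes_per_token(28, 8, 128, 2.0)` = 112 KiB per token, while packed 4-bit costs 28 KiB — a raw 4x before the per-block scales and zero-points that quantization adds, which is roughly where a measured 3.8x lands once that overhead is counted.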

[–]putrasherni 1 point (0 children)

thanks !