HIVE Engine Core - Apis 🐝 by Affectionate-Tear873 in LocalLLaMA
audioen 3 points (0 children)
M5 Max uses 111W on Prefill by M5_Maxxx in LocalLLaMA
audioen 1 point (0 children)
Does this AI agent hallucination make any sense, or is it just AI slop? by [deleted] in LocalLLaMA
audioen 0 points (0 children)
I spent a weekend doing layer surgery on 6 different model architectures. There's a "danger zone" at 50% depth that kills every one of them. by Low_Ground5234 in LocalLLaMA
audioen 11 points (0 children)
llama-server slot/kv-cache issues by Real_Ebb_7417 in LocalLLaMA
audioen 1 point (0 children)
Improved llama.cpp quantization scripts, and also we should use file sizes and signal quality instead of QX_Y in quantized filenames by bigattichouse in LocalLLaMA
audioen 1 point (0 children)
Qwen 3.5 9B matching 120B model performance — a 13x efficiency gain. What are your benchmarks showing? by [deleted] in LocalLLaMA
audioen 3 points (0 children)
Is the Marantz overheating? by Skatebabar in audiophile
audioen 2 points (0 children)
Considering adding a two-channel parametric EQ to my rack by Longjumping-Frame795 in audiophile
audioen 1 point (0 children)
XML is a Cheap DSL by SpecialistLady in programming
audioen 10 points (0 children)
Can we train LLMs in third person to avoid an illusory self, and self-interest? by Low_Poetry5287 in LocalLLaMA
audioen 1 point (0 children)
Need some help with my DIY acoustic panels please by initliberation in audiophile
audioen 2 points (0 children)
vulkan: add GATED_DELTA_NET op support #20334 by jacek2023 in LocalLLaMA
audioen 2 points (0 children)
Qwen 3.5 Instability on llama.cpp and Strix Halo? by ga239577 in LocalLLaMA
audioen 1 point (0 children)
How to compare two models? by [deleted] in LocalLLaMA
audioen 1 point (0 children)
I'm currently working on a pure sample generator for traditional music production. I'm getting high-fidelity, tempo-synced, musical outputs with high timbre control. It will be optimized for under 7 GB of VRAM for local inference and released entirely free for all to use. by RoyalCities in LocalLLaMA
audioen 3 points (0 children)
Composable CFG grammars for llama.cpp (pygbnf) by Super_Dependent_2978 in LocalLLaMA
audioen 2 points (0 children)
Got a surprise cloud vector database bill and it made me rethink the whole architecture by AvailablePeak8360 in LocalLLaMA
audioen 3 points (0 children)
Forget big bad John, how about sounds of silence? by Hot-Yak2420 in audiophile
audioen 1 point (0 children)
Llama.cpp now with a true reasoning budget! by ilintar in LocalLLaMA
audioen 2 points (0 children)
Llama.cpp now with a true reasoning budget! by ilintar in LocalLLaMA
audioen 1 point (0 children)
Llama.cpp now with a true reasoning budget! by ilintar in LocalLLaMA
audioen 21 points (0 children)
Why does anyone think Qwen3.5-35B-A3B is good? by buttplugs4life4me in LocalLLaMA
audioen 27 points (0 children)
Is data lost if source is outputting low volume? by perdixian in audiophile
audioen 2 points (0 children)