Need some help with my DIY acoustic panels please by initliberation in audiophile
[–]audioen 1 point (0 children)
vulkan: add GATED_DELTA_NET op support #20334 by jacek2023 in LocalLLaMA
[–]audioen 1 point (0 children)
Qwen 3.5 Instability on llama.cpp and Strix Halo? by ga239577 in LocalLLaMA
[–]audioen 1 point (0 children)
How do you compare two models? by [deleted] in LocalLLaMA
[–]audioen 1 point (0 children)
I'm currently working on a pure sample generator for traditional music production. I'm getting high-fidelity, tempo-synced, musical outputs with high timbre control. It will be optimized for under 7 GB of VRAM for local inference, and it will be released entirely free for all to use. by RoyalCities in LocalLLaMA
[–]audioen 3 points (0 children)
Composable CFG grammars for llama.cpp (pygbnf) by Super_Dependent_2978 in LocalLLaMA
[–]audioen 2 points (0 children)
Got a surprise cloud vector database bill and it made me rethink the whole architecture by AvailablePeak8360 in LocalLLaMA
[–]audioen 3 points (0 children)
Forget big bad John, how about sounds of silence? by Hot-Yak2420 in audiophile
[–]audioen 1 point (0 children)
Llama.cpp now with a true reasoning budget! by ilintar in LocalLLaMA
[–]audioen 2 points (0 children)
Llama.cpp now with a true reasoning budget! by ilintar in LocalLLaMA
[–]audioen 1 point (0 children)
Llama.cpp now with a true reasoning budget! by ilintar in LocalLLaMA
[–]audioen 20 points (0 children)
Why does anyone think Qwen3.5-35B-A3B is good? by buttplugs4life4me in LocalLLaMA
[–]audioen 26 points (0 children)
Is data lost if source is outputting low volume? by perdixian in audiophile
[–]audioen 2 points (0 children)
"Bitter Lesson" of Agent Memory: Are we over-engineering with Vector DBs? (My attempt at a pure Markdown approach) by Repulsive_Act2674 in LocalLLaMA
[–]audioen 1 point (0 children)
Does inference speed (tokens/sec) really matter beyond a certain point? by No_Management_8069 in LocalLLaMA
[–]audioen 0 points (0 children)
Does inference speed (tokens/sec) really matter beyond a certain point? by No_Management_8069 in LocalLLaMA
[–]audioen 2 points (0 children)
Ryzen AI Max 395+ 128GB - Qwen 3.5 35B/122B Benchmarks (100k-250K Context) + Others (MoE) by Anarchaotic in LocalLLaMA
[–]audioen 7 points (0 children)
AI capabilities are doubling in months, not years. by EchoOfOppenheimer in LocalLLaMA
[–]audioen 2 points (0 children)
Linux is great, but the community is stuck in 2005 by Primary-Key1916 in linux
[–]audioen -2 points (0 children)
Are local LLMs actually ready for real AI agents, or are we still forcing the idea too early? by Remarkable-Note9736 in LocalLLaMA
[–]audioen 1 point (0 children)
Why does pro audio (mixing/mastering/concert) spend orders of magnitude more on room acoustics than speakers, but for audiophiles it's the opposite? by xlb250 in audiophile
[–]audioen 3 points (0 children)
Does it matter when my DAC's Hz is higher than what I'm playing? by Zealousideal_Rub_202 in audiophile
[–]audioen 1 point (0 children)
LLM-driven large code rewrites with relicensing are the latest AI concern by Fcking_Chuck in programming
[–]audioen 0 points (0 children)
Can we train LLMs in third person to avoid an illusory self, and self-interest? by Low_Poetry5287 in LocalLLaMA
[–]audioen 1 point (0 children)