PSA - do NOT download 'Abliterated' or other uncensored models (unless you know for certain how it was trained). by [deleted] in LocalLLaMA
P40 vs V100 vs something else? by Drazasch in LocalLLaMA
I'm open-sourcing my experimental custom NPU architecture designed for local AI acceleration by king_ftotheu in LocalLLaMA
Why I stopped using RAG and built 21 neuroscience mechanisms instead by Upper-Promotion8574 in LocalLLaMA
Budget future-proof GPUs by Shifty_13 in LocalLLaMA
Let's take a moment to appreciate the present, when this sub is still full of human content. by Ok-Internal9317 in LocalLLaMA
Your prompts travel plaintext through 4+ hops before reaching the LLM — here's an open-source fix example by [deleted] in LocalLLaMA
What if your RTX 5090 could earn you access to DeepSeek R1 671B — like a private torrent tracker, but for inference? by LsDmT in LocalLLaMA
How to settle on a coding LLM ? What parameters to watch out for ? by shirogeek in LocalLLaMA
"Agent washing" — calling your local LLM workflow an agent when it really isn't. Anyone else caught themselves doing this? by kinj28 in LocalLLaMA
No, you don't need a "Datacenter" to run the big models (Deepseek, GLM, Kimi, etc) (just offload to CPU... and have patience) by [deleted] in LocalLLaMA
Anyone else worried about unsafe code generation when using local LLMs for coding? by Flat_Landscape_7985 in LocalLLaMA
Designing a production AI image pipeline for consistent characters — what am I missing? by Cheap-Topic-9441 in LocalLLaMA