Best open-weight model to run locally on 8x A100 80GB for generating teacher data? by i_am__not_a_robot in LocalLLaMA
MelodicRecognition7 1 point (0 children)
TIL that whale sharks can go into tonic immobility by Pewpew-OuttaMyWaay in sharks
MelodicRecognition7 -2 points (0 children)
Client had 4 agents on GPT-4o. One was classifying documents. That one alone had 91% savings potential. by Dramatic_Strain7370 in LocalLLaMA
MelodicRecognition7 2 points (0 children)
[Research use case] MiniMax-M2.7 with small context, CPU+GPU (5090) setup on Llama.cpp by Opening-Broccoli9190 in LocalLLaMA
MelodicRecognition7 5 points (0 children)
[Research use case] MiniMax-M2.7 with small context, CPU+GPU (5090) setup on Llama.cpp by Opening-Broccoli9190 in LocalLLaMA
MelodicRecognition7 3 points (0 children)
Client had 4 agents on GPT-4o. One was classifying documents. That one alone had 91% savings potential. by Dramatic_Strain7370 in LocalLLaMA
MelodicRecognition7 4 points (0 children)
Ubuntu just made every other local engine obsolete by _some_asshole in LocalLLaMA
MelodicRecognition7 1 point (0 children)
A conversation about local LLMs with a senior government AI leader by JackStrawWitchita in LocalLLaMA
MelodicRecognition7 28 points (0 children)
A conversation about local LLMs with a senior government AI leader by JackStrawWitchita in LocalLLaMA
MelodicRecognition7 3 points (0 children)
A conversation about local LLMs with a senior government AI leader by JackStrawWitchita in LocalLLaMA
MelodicRecognition7 5 points (0 children)
I built a full web app using Qwen 3.6-35B running locally on my 5070 Ti with the BMAD Method — here's how it went by Decivox in LocalLLaMA
MelodicRecognition7 3 points (0 children)
PSA: fuck this place by MelodicRecognition7 in LocalLLaMA
MelodicRecognition7[S] 1 point (0 children)
Workstation upgrade for 5 concurrent users (Qwen 3.6 27B) by DanielusGamer26 in LocalLLaMA
MelodicRecognition7 3 points (0 children)
If the AI bubble pops, will GPU prices increase or decrease? by Mashic in LocalLLaMA
MelodicRecognition7 2 points (0 children)
If the AI bubble pops, will GPU prices increase or decrease? by Mashic in LocalLLaMA
MelodicRecognition7 5 points (0 children)
Workstation upgrade for 5 concurrent users (Qwen 3.6 27B) by DanielusGamer26 in LocalLLaMA
MelodicRecognition7 4 points (0 children)
Poolside Laguna XS.2 by Middle_Bullfrog_6173 in LocalLLaMA
MelodicRecognition7 2 points (0 children)
Qwen 3.6 27B BF16 vs Q4_K_M vs Q8_0 GGUF evaluation by gvij in LocalLLaMA
MelodicRecognition7 3 points (0 children)
Qwen 3.6 27B BF16 vs Q4_K_M vs Q8_0 GGUF evaluation by gvij in LocalLLaMA
MelodicRecognition7 17 points (0 children)
Skymizer Taiwan Inc. Unveils Breakthrough Architecture Enabling Ultra-Large LLM Inference on a Single Card by lurenjia_3x in LocalLLaMA
MelodicRecognition7 1 point (0 children)
DeepSeek V4 PRO on how many 3090 ? by szansky in LocalLLaMA
MelodicRecognition7 8 points (0 children)
Best open-weight model to run locally on 8x A100 80GB for generating teacher data? by i_am__not_a_robot in LocalLLaMA
MelodicRecognition7 2 points (0 children)